AsyncMiddleManServlet response flushing #12323
If you don't need to mutate content, don't use `AsyncMiddleManServlet`; use a simpler proxy servlet instead.
I have created a unit test which illustrates the issue. It does not work.
It's not a bug, perhaps a missing feature.
Your current workaround would be to call `flush()` after the content is written; there is also `ServletOutputStream.isReady()` to guard the flush. So your final solution would be:

```java
class MyAMMS extends AsyncMiddleManServlet {
    protected void writeProxyResponseContent(ServletOutputStream output, ByteBuffer content) throws IOException {
        super.writeProxyResponseContent(output, content);
        if (output.isReady()) {
            output.flush();
        }
    }
}
```

@iiliev2 do you want to make this contribution?
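To check whether such a flush actually takes effect end to end, here is a minimal sketch using the JDK's built-in `HttpClient` against a hypothetical proxied URL (the class name, port, and path are illustrative): with flushing in place the per-line timestamps should be spread out over time; without it they tend to arrive in one burst at the end.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.stream.Stream;

public class StreamingCheck
{
    public static void main(String[] args) throws Exception
    {
        // Hypothetical URL served through the proxy; adjust to your setup.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/some/proxied/path")).build();
        HttpClient client = HttpClient.newHttpClient();

        // BodyHandlers.ofLines() exposes the body as a lazy stream, so each
        // line is observed as soon as the proxy flushes it to the client.
        HttpResponse<Stream<String>> response = client.send(request, HttpResponse.BodyHandlers.ofLines());

        long start = System.nanoTime();
        response.body().forEach(line ->
            System.out.printf("%6d ms  %s%n", (System.nanoTime() - start) / 1_000_000, line));
    }
}
```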
I found this old thread where you discuss how to do an async flush:
I can't find any information explaining why that is. Could you please clarify this? I don't see it being part of the servlet spec, but I see in the implementation of `HttpOutput` that an async `flush()` requires `isReady()` to have been called first. Do I have to call `isReady()` before every `flush()`? Otherwise I get:

```
java.lang.IllegalStateException: isReady() not called: s=OPEN,api=ASYNC,sc=false,e=null
	at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:712)
	at com.iviliev.Servlet$1.onWritePossible(Servlet.java:78)
	at org.eclipse.jetty.server.HttpOutput.run(HttpOutput.java:1494)
	at org.eclipse.jetty.server.handler.ContextHandler.handle(ContextHandler.java:1469)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:597)
	at org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:461)
	at ...
```

Apart from that, I have added an additional test in my repo. All tests are now also executed both via HTTP/1.1 and HTTP/2. For HTTP/1.1 the flushing is also needed for the new test to succeed; otherwise the proxying continues even though the client may be long gone.
In the email thread you linked, @gregw suggested that flushes are asynchronous, so yes, you have to call `isReady()` before `flush()`.

This is because if you did an asynchronous write, and the write did not complete, then you cannot write or flush again until the pending write completes. If you perform the sequence:

```java
write(large); // does not complete, isReady() == false
flush();      // throws IllegalStateException
```

I have corrected the above comment with the right code, and yes, you are occasionally hitting exactly this problem -- sorry I suggested the incorrect solution.

If the call to `flush()` fails, it means the connection with the client is broken. When an HTTP/1 client aborts, the server is typically not informed: only by forcing a write can the server detect that the connection is broken. On the contrary, with HTTP/2 the server is informed that the request has been canceled, so it may cancel the request even when no writes are in progress.

Just to reiterate: I suggest you do not override `onWritePossible()`.
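Putting the two rules together (check `isReady()` after every write, and only flush while it reports true), here is a minimal sketch of a `WriteListener` that follows this discipline. The class name and the pending-chunk queue are illustrative stand-ins, not the actual `ProxyWriter` code, and the `javax.servlet` names assume the ee8 environment.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Queue;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;

class FlushingWriteListener implements WriteListener
{
    private final ServletOutputStream output;
    private final Queue<ByteBuffer> chunks; // illustrative queue of pending content

    FlushingWriteListener(ServletOutputStream output, Queue<ByteBuffer> chunks)
    {
        this.output = output;
        this.chunks = chunks;
    }

    @Override
    public void onWritePossible() throws IOException
    {
        // Only write/flush while isReady() reports true; if it returns false,
        // the container calls onWritePossible() again when the pending write
        // completes -- flushing before that throws IllegalStateException.
        while (output.isReady())
        {
            ByteBuffer chunk = chunks.poll();
            if (chunk == null)
                return;
            byte[] bytes = new byte[chunk.remaining()];
            chunk.get(bytes);
            output.write(bytes);
            if (output.isReady())
                output.flush();
        }
    }

    @Override
    public void onError(Throwable failure)
    {
        // Abort the exchange; the details depend on the surrounding code.
    }
}
```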
Are you referring to graceful termination, initiated by the peer (in my test, the client)? I would like to check what would happen if there is an issue with the peer and the RST_STREAM does not reach the proxy. Is there a way to test this with the Jetty client? For example, if there were a bug in the client, aborting requests without properly notifying the proxy and continuing to issue further requests later (i.e. not closing the TCP connection).
I am referring to RST_STREAM. The term "graceful termination" is used when sending GOAWAY, so I would not confuse the two. If the client stops sending frames for a particular stream, eventually the server side (the proxy in your case) would time out the stream and issue a RST_STREAM to the client.
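That server-side timeout is configurable. Below is a minimal sketch of configuring it in an embedded server, assuming a cleartext HTTP/2 connector and the `setStreamIdleTimeout` setter on Jetty's HTTP/2 server connection factory; the timeout values and class name are arbitrary.

```java
import org.eclipse.jetty.http2.server.HTTP2CServerConnectionFactory;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class StreamTimeoutConfig
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        HttpConfiguration config = new HttpConfiguration();
        HttpConnectionFactory h1 = new HttpConnectionFactory(config);
        HTTP2CServerConnectionFactory h2c = new HTTP2CServerConnectionFactory(config);

        // If a client stops sending frames for a stream, this timeout expires
        // and the server resets that stream (RST_STREAM).
        h2c.setStreamIdleTimeout(30_000);

        ServerConnector connector = new ServerConnector(server, h1, h2c);
        connector.setPort(8080);
        // Connection-level idle timeout, separate from the per-stream one.
        connector.setIdleTimeout(60_000);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
```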
@iiliev2 let us know if you would like to contribute the improvement discussed above, otherwise we'll do it.
I can contribute a fix toward the 10.x branch. Is that possible?
@iiliev2 no, we only accept contributions to the 12.0.x branch.
@sbordet I just noticed something suspicious.
Am I right, or what am I missing?
Related to this, I still have some doubts that this will be sufficient.
Good catch.
Let's assume a previous write actually hit the network and got TCP congested, so `isReady()` returns false. Later, the write completes and the container calls `onWritePossible()` again, which writes any pending content. Everything should work fine. I'll make the changes for this.
* Fixed `ProxyWriter.chunks` locking.
* Made `writeProxyResponseContent()` protected so it can be overridden to flush the response content.
* Added test case.

Signed-off-by: Simone Bordet <simone.bordet@gmail.com>

* Fixed `ProxyWriter` concurrency, now using IteratingCallback to avoid races.
* Made `writeProxyResponseContent()` protected so it can be overridden to flush the response content.
* Added test case.

Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
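For readers unfamiliar with the pattern named in the commit message above, here is a minimal sketch of serializing queued writes with Jetty's `IteratingCallback`. The `AsyncSink` interface, class name, and queue are illustrative stand-ins, not the actual `ProxyWriter` code.

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.IteratingCallback;

class SerializedWriter extends IteratingCallback
{
    // Illustrative asynchronous sink, not a real Jetty API.
    interface AsyncSink
    {
        void write(ByteBuffer buffer, Callback callback);
    }

    private final Queue<ByteBuffer> chunks = new ConcurrentLinkedQueue<>();
    private final AsyncSink sink;

    SerializedWriter(AsyncSink sink)
    {
        this.sink = sink;
    }

    void offer(ByteBuffer buffer)
    {
        chunks.offer(buffer);
        // iterate() may be called from any thread; IteratingCallback
        // guarantees that process() never runs concurrently with itself.
        iterate();
    }

    @Override
    protected Action process()
    {
        ByteBuffer buffer = chunks.poll();
        if (buffer == null)
            return Action.IDLE; // nothing to write; a later offer() re-iterates
        // Start one asynchronous write; this (as Callback) is notified on
        // completion, which makes IteratingCallback call process() again.
        sink.write(buffer, this);
        return Action.SCHEDULED;
    }
}
```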
Jetty 12
ee8
Java 21
Continuing the discussion from #12294 (comment) as this is a separate topic.
The proxy servlets (in particular `AsyncMiddleManServlet`, as that is what I am looking at) do not seem to flush responses as the proxy client receives new bytes. Ideally it should auto-flush in an optimal way (for example, if HTTP/1.1 and chunked, it should flush on each full chunk). Otherwise there could be huge delays between when the data is available and when it is actually returned to the caller (due to buffering). Since this is a proxy, I expected this to be the default; otherwise I hope to be able to set it up that way.

The `ContentTransformer` does not seem to provide a way to control this (suggested in the previous GitHub question). That abstraction is about mutating the raw data in some way.

The only way that I can see from the code of `AsyncMiddleManServlet` is to call `flush` on the `ServletOutputStream` right after `ProxyWriter` calls `writeProxyResponseContent`. Unfortunately that method is package-private, so the only way is to override the entire `onWritePossible` and flush afterward.

What would you advise is the right way to do this?
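To make the buffering delay described above concrete outside the proxy code, here is a minimal sketch of a plain blocking servlet (`javax.servlet` names for ee8; the servlet path and timings are illustrative) where the explicit flush is what makes each chunk reach the client immediately. The proxy case is the asynchronous analogue of the same effect.

```java
import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/slow")
public class SlowChunksServlet extends HttpServlet
{
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException
    {
        response.setContentType("text/plain");
        ServletOutputStream output = response.getOutputStream();
        for (int i = 0; i < 10; i++)
        {
            output.println("chunk " + i);
            // Without this flush, the bytes may sit in the response buffer
            // until it fills up or the response completes, which is the
            // delay described above.
            response.flushBuffer();
            try
            {
                Thread.sleep(1000);
            }
            catch (InterruptedException x)
            {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```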