ServerConnector with default #selector & acceptors causes DOA jetty on high core count machines #4492
What is "a high core count machine" to you? We regularly run / build / test / load-test on 64 core machines. (default configurations of Jetty) According to https://aws.amazon.com/ec2/instance-types/c5/ the |
Defaults for 9.4.25:
  Acceptors: between 1 and 4 if undeclared.
  Selectors: between 1 and whichever is smaller of (core count / 2) or (ThreadPool.maxThreads / 16).

Defaults for 9.4.5:
  Acceptors: between 1 and 4 depending on core count.
  Selectors: between 1 and 4 depending on core count.
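To make the 9.4.25 numbers concrete, here is a rough back-of-the-envelope sketch of that selector default, assembled from the table above rather than copied from the Jetty source (the class and method names are made up for illustration):

    // Hypothetical helper mirroring the 9.4.25 defaults quoted above.
    class SelectorDefaults {
        // Assumption: when Jetty cannot see the pool's maxThreads, only the core-count term applies.
        static int selectors(int cores, int maxThreadsOrZero) {
            if (maxThreadsOrZero > 0)
                return Math.max(1, Math.min(cores / 2, maxThreadsOrZero / 16));
            return Math.max(1, cores / 2);
        }
    }
    // SelectorDefaults.selectors(16, 0) == 8   (c5.4xlarge, pool size unknown to Jetty)
    // SelectorDefaults.selectors(72, 0) == 36  (72-core machine, pool size unknown to Jetty)

Add the 1-4 acceptors on top of that, and all of those threads come out of the same server thread pool, which is what makes the small 10-thread pool discussed later in this thread fall over on big machines.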
The failures I was seeing were on a physical machine with 72 cores, not a virtualized machine. I was running an automated test for the platform labeler plugin. Same failures were detected with the git client plugin and the git plugin. The issue I was seeing was resolved in all three cases by the change by @jglick.
High is relative. Normally in our CI we use lower core count machines or containers and did not see this issue. However, we did see this issue in our Windows CI, which used 2 containers inside a Windows Server 2019 host (which was running on a c5.4xlarge). I could not reproduce it on my Windows laptop (just 4 cores, 8 including hyperthreading), or on smaller AWS machines. Running directly in Windows Server 2019 on a c5.4xlarge also shows the failure. You could change the title to "Jetty does not service requests with.... on a c5.4xlarge". I can provide steps to reproduce if you require.
Did you get any warning on startup from the thread pool budget? |
To reiterate jenkinsci/jenkins-test-harness#193 (comment), we are not talking here about Jetty performing a bit less than optimally under some heavy production workloads. The situation here is that we were creating a server using defaults:

    Server server = new Server(new ThreadPoolImpl(new ThreadPoolExecutor(10, 10, 10L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>(), new ThreadFactory() {…})));
    // …
    new ServerConnector(server)

and, in some environments (but not, say, on my laptop!), Jetty would completely fail to respond to a single request. Just sat there with the socket open. No visibly busy threads (just an idle-looking thread pool), no warnings, nothing. Adding explicit acceptor and selector counts made it respond.
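For reference, a minimal sketch of what pinning those counts explicitly can look like, using the ServerConnector(Server, int acceptors, int selectors) constructor; the 1/1 values here are illustrative rather than the exact values used in the harness change:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    Server server = new Server(/* same small thread pool as above */);
    // One acceptor and one selector are enough for a test service handling a single
    // connection at a time, and they leave the rest of the pool free to run requests.
    ServerConnector connector = new ServerConnector(server, 1, 1);
    connector.setPort(0); // bind to an ephemeral port
    server.addConnector(connector);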
@jglick the thread pool configuration is wrong. You have at most 10 threads in the pool, which is an incredibly small number for a server, also considering that Jetty steals some for the selectors, the acceptors, etc. Please consider A) using Jetty's own QueuedThreadPool, or B) at least sizing your pool adequately. With only 10 threads, and Jetty using a few internally, you probably are out of threads, so that your pool has none left to actually serve requests. Is there any reason to underconfigure the thread pool in such a way?
Again this is just a test service which rarely serves more than one concurrent connection. I am working on verifying whether a stock thread pool also avoids the issue. |
Making the following change did indeed resolve the issue:

    - server = new Server(new ThreadPoolImpl(new ThreadPoolExecutor(10, 10, 10L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), new ThreadFactory() {
    -     public Thread newThread(Runnable r) {
    -         Thread t = new Thread(r);
    -         t.setName("Jetty Thread Pool");
    -         return t;
    -     }
    - })));
    + server = new Server();

Just checked the javadoc for Server and it is not obvious that the selectors/acceptors steal from the pool. I would suggest leaving this open to:
1. warn when the configured pool is too small for the acceptors/selectors, and
2. clarify this in the Server javadoc.
@jtnord we do produce a warning for case 1 - the problem is that we can produce a warning only if we know the max number of threads of the thread pool. We have Jetty's ThreadPool.SizedThreadPool interface for that, but your code is using something different (a ThreadPoolExecutor wrapped in ThreadPoolImpl), so the max number of threads is not visible to us. About 2, I filed #4493. Thanks!
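For context, a small sketch of the difference described here, assuming Jetty 9.4's org.eclipse.jetty.util.thread API: QueuedThreadPool implements ThreadPool.SizedThreadPool, so its maxThreads is visible to Jetty, while a plain Executor (such as a wrapped ThreadPoolExecutor) exposes no size at all:

    import java.util.concurrent.Executor;
    import java.util.concurrent.Executors;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;
    import org.eclipse.jetty.util.thread.ThreadPool;

    QueuedThreadPool qtp = new QueuedThreadPool(200, 8);
    // Jetty can read the bounds of its own pool, so selector defaults and the
    // thread pool budget warning can take maxThreads into account.
    ThreadPool.SizedThreadPool sized = qtp;
    System.out.println("max threads visible to Jetty: " + sized.getMaxThreads());

    // An arbitrary Executor carries no size information, so no budget warning is possible.
    Executor opaque = Executors.newFixedThreadPool(10);
    System.out.println(opaque instanceof ThreadPool.SizedThreadPool); // false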
One thing that comes up often on Stack Overflow regarding Jetty threading is the mistaken assumption that the ThreadPool is only used for request processing. That's an incorrect assumption: the ThreadPool is generic, and can be used for anything that needs a thread within Jetty.
Also of note, the …
Thanks @joakime, however that gets us back to having hard-coded numbers for min/max etc., as you can only set the …
Jetty knows better than us, so let Jetty configure its own defaults for the ThreadPool (min 8, max 200) and the acceptor/selector threads. This applies feedback from jetty/jetty.project#4492. We lose the thread naming, but all the Jetty threads in the servers were called "Jetty Thread Pool"-something, so we did not isolate them between the reverse proxy and the JenkinsRule or HudsonTestCase; the fact that they are now called qtp-something is pretty much ok. Nothing else seems to use Jetty's QueuedThreadPool in Jenkins, so any thread starting with qtp is pretty much going to be the Jetty thread pool, and it matches more closely what you would get when running Jenkins with java -jar...
The default values are "hardcoded" too, in the sense that they are computed from the machine (core count) and the pool size rather than declared by you. Your usage of a raw ThreadPoolExecutor hides that pool size from Jetty. I would recommend that you stick with QueuedThreadPool.

To sum up:

    QueuedThreadPool pool = new QueuedThreadPool(/* maxThreads - this is optional */);
    pool.setName("Jetty Thread Pool");
    server = new Server(pool);

If you really want complete control over the thread name:

    QueuedThreadPool pool = new QueuedThreadPool() {
        @Override
        public Thread newThread(Runnable job) {
            // Create the thread, e.g. to control its name.
            Thread thread = new Thread(job);
            thread.setName("Jetty Thread Pool-" + thread.getId());
            return thread;
        }
    };

However, note that Jetty renames threads at runtime anyway (see below), so a fixed name will not survive verbatim.
We do not count on a thread name in logs for correlation, given that it can change at runtime. If you are doing that, I recommend you switch to logging the thread id :) Strangely enough, at least the acceptor thread does change the thread name? Thank you for the reference to QueuedThreadPool.
Jetty does change the thread names; for example, the thread serving a request will have the request URI in its name. Typically that's ok. @jtnord feel free to close the issue if that's solved for you.
This issue has been automatically marked as stale because it has been a full year without activity. It will be closed if no further activity occurs. Thank you for your contributions. |
This issue has been closed due to it having no activity. |
Jetty version
9.4.6 -> 9.4.25
Java version
various flavours of jdk8 (oracle, openjdk)
OS type/version
Windows and Linux
Description
When using Jetty with a ServerConnector created without specifying the number of acceptors and selectors, Jetty does not process any incoming requests. This has been observed reliably on a c5.4xlarge in AWS. See jenkinsci/jenkins-test-harness#193 for an example.
The symptom of this is that Jetty starts and is listening (as shown by netstat) but any requests just time out.

Note: the affected versions range is a tad unreliable. The code works with jetty-9.4.5.v20170502 and fails with later versions from Maven Central, but recompiling later Jetty versions from a tag with jdk8 appears to resolve the issue (for an unknown reason); the running JVM is the same in all cases. So I do not believe that a code diff between jetty-9.4.5.v20170502 and jetty-9.4.5.v20180619 is of any use in order to bisect this issue.