Share netty event loops between transports #46346
Conversation
Currently Elasticsearch creates independent event loop groups for each transport type (http and internal). This is unnecessary and can lead to contention when different threads access shared resources (for example, allocators). This commit moves to a model where, by default, the event loops are shared between the transports. The previous behavior can be restored by explicitly setting the http worker count.
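For readers unfamiliar with Netty, here is a minimal sketch of the idea (not the actual Elasticsearch code; the class name, worker count, and omitted handlers are illustrative): both server bootstraps reuse one event loop group, so HTTP and internal transport traffic runs on the same threads and shares per-thread resources such as allocators.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class SharedEventLoopSketch {
    public static void main(String[] args) {
        // One worker group for both transports instead of one group per transport.
        EventLoopGroup sharedWorkers = new NioEventLoopGroup(4);

        ServerBootstrap httpBootstrap = new ServerBootstrap()
            .group(sharedWorkers)
            .channel(NioServerSocketChannel.class);
        ServerBootstrap internalTransportBootstrap = new ServerBootstrap()
            .group(sharedWorkers)
            .channel(NioServerSocketChannel.class);

        // ... add channel handlers and bind ports here ...

        // The shared group may only be shut down once neither transport needs it.
        sharedWorkers.shutdownGracefully();
    }
}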
Pinging @elastic/es-distributed
I've left some comments around the lifecycle of these groups
I updated this for the comments. I'm not sure what to do on the thread count question. I know in the team discussion we said maybe we should make the minimum at least 4 (because 4 was theoretically the single-core minimum before). But the setting allows the user to configure a minimum of 1, which means I would be breaking that. Are we okay with that?
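Purely as an illustration of the trade-off being discussed (this is not what the PR settles on, and the floor of 4 is just the number mentioned above), such a clamp would look like:

static int effectiveWorkerCount(int configuredWorkerCount) {
    // Hypothetical: enforce a floor of 4 even if the setting permits 1,
    // which is exactly the compatibility concern raised above.
    return Math.max(4, configuredWorkerCount);
}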
@tbrooks8 can you merge latest master here? I've left 2 minor comments; otherwise looking good.
assertFalse(transportGroup.getLowLevelGroup().isShuttingDown());
transportGroup.shutdown();
assertTrue(httpGroup.getLowLevelGroup().isShuttingDown());
assertTrue(transportGroup.getLowLevelGroup().isShuttingDown());
can you also check the termination future? i.e. whether the shutdown has actually completed here?
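A possible shape for that extra check (a sketch extending the quoted test; it assumes getLowLevelGroup() exposes a standard Netty EventExecutorGroup, whose awaitTermination and terminationFuture methods are used here):

// Sketch: assert that shutdown has actually completed, not just started.
assertTrue(transportGroup.getLowLevelGroup().awaitTermination(10, TimeUnit.SECONDS));
assertTrue(transportGroup.getLowLevelGroup().terminationFuture().isSuccess());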
I have addressed the previous comments on this PR.
LGTM. Thanks Tim!
In #46346 we changed the distribution of work across event loops but this was marked as a `>non-issue` so did not get a mention in the release notes. However, plugin authors might need to be aware of this change, so this commit records it in the release notes as an enhancement instead. Closes #67960