Allow max_map_count=65536 on single nodes for Docker containers #45815
This setting has created tremendous headaches and MANY issues, in Elasticsearch and in every other project that ships Elasticsearch in its Docker containers (SonarQube, VMware Photon, and more). All of these could have been avoided by allowing max_map_count to stay at 65536 and programming to account for, or reduce, memory-map usage. (Sorry @jasontedor, I know you have been trying to address the memory issue for a while now.)
A list of issues that would probably have been resolved by max_map_count = 65536 instead of 262144, with a new process starting if more maps are needed: elastic/elasticsearch-docker#92, and so many more… These may have started when the default memory usage was increased to 2G.
I think memory usage needs to be addressed; it has been patched and "worked around" for too long. Running out of memory maps should trigger some garbage collection, and/or possibly a new process (which may be yet another workaround if the underlying issue is memory leaks).
I think there is some misunderstanding here. Along those lines, and I'm not sure if you're aware, but I made a change a few months ago to not enforce this check if you're willing to forego the use of mmap. The PR that added that is #32421, and this is documented in the relevant bootstrap check.
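For reference, a sketch of what that opt-out looks like in `elasticsearch.yml` (my reading of PR #32421 is that the setting is `node.store.allow_mmap`; verify against the documentation for your version):

```yaml
# elasticsearch.yml
# Disable mmap-based store types; Elasticsearch falls back to niofs.
# With mmap disabled, the vm.max_map_count bootstrap check no longer applies.
node.store.allow_mmap: false
```

The trade-off is that niofs can be slower than mmapfs/hybridfs for some workloads, which is why this is opt-in rather than the default.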
My understanding could be way off. I assumed that something was using up the memory maps and not releasing them, hence the comments about garbage collection and about OOM errors.
I still feel that starting a new process when more memory maps are needed, because the current process is exceeding the system-configured limit, is a better solution than changing system-wide settings to increase memory maps per process. It seems more scalable. What happens when 262144 isn't enough?
Pinging @elastic/es-core-infra
https://jira.sonarsource.com/browse/SONAR-12264
This can be closed, as @jasontedor's solution of disabling mmapfs and using only niofs stops the bootstrap check for max_map_count. Thanks @jasontedor!
We tried to set the limit to ensure that all reasonable use cases would never run out of memory maps. Since we set the limit at 2^18, I have never seen an instance of a node running out of memory maps (but we did see them at 2^16). I think that from a scalability perspective, with 2^18 maps, a node will run into other scalability issues long before it runs out of memory maps.

It's possible that we could consider reducing this limit now that we default to a smaller number of shards, and have employed other strategies to keep the number of shards on a single node down, but it would require quite some investment to be sure of this. I understand that from a logistical perspective it can be difficult to increase the number of memory maps when you don't own the underlying platform, and that's why we've implemented other ways to avoid this check, for users who can accept the trade-offs.

Given that, I'm going to close this issue. Thanks for the discussion here.
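As a side note for anyone debugging this: on Linux you can count the memory maps a process actually holds and compare against the limit (using the current shell's PID here as a stand-in; substitute the Elasticsearch PID in practice):

```shell
# Each line in /proc/<pid>/maps is one memory-mapped region held by that process.
wc -l < /proc/$$/maps

# The per-process ceiling that the bootstrap check compares against.
cat /proc/sys/vm/max_map_count
```

Watching the first number grow toward the second over time is a quick way to tell whether a node is actually at risk of exhausting its maps.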
Describe the feature:
Please add a feature to allow max_map_count to be the system default of 65536 for running single nodes in a Docker container. Currently, the only way to run Elasticsearch in a Docker container is to set vm.max_map_count = 262144 on the host. This is impossible (and likely unnecessary) in some environments.
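For context, the host-level change being objected to is the documented workaround, along these lines (assumes a Linux Docker host and root access; the default limit is around 65530 on many kernels):

```shell
# Check the current per-process memory-map limit.
cat /proc/sys/vm/max_map_count

# Raise it for the running kernel (root required; does not survive a reboot).
sysctl -w vm.max_map_count=262144

# Persist the change across reboots.
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
```

The point of this issue is that none of the above is possible when you don't control the host, e.g. on managed container platforms.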
SonarQube currently uses Elasticsearch, and trying to run it in a container fails at startup due to bootstrap checks if this is not set on the host. However, once the container is up and running, it continues to run fine even if the host setting is reverted to 65536.
If this many memory maps are really needed to avoid an out-of-memory situation, then perhaps another alternative would be to start a new process to handle the extra resource requirement rather than requiring a system-wide change. Requiring a system-wide change seems like poor practice, as it may negatively impact system stability and security.
Elasticsearch version (bin/elasticsearch --version):
Plugins installed: []
JVM version (java -version):
OS version (uname -a if on a Unix-like system):
Description of the problem including expected versus actual behavior:
Steps to reproduce:
Please include a minimal but complete recreation of the problem, including
(e.g.) index creation, mappings, settings, query etc. The easier you make it for
us to reproduce it, the more likely it is that somebody will take the time to look at it.
Provide logs (if relevant):