
Disable mmap check when using NIOFS #32267

Closed · jibbe opened this issue Jul 22, 2018 · 5 comments
Labels: :Core/Infra/Core (Core issues without another label)

jibbe commented Jul 22, 2018

I'm having a hard time working out how to run the Elasticsearch image in a Kubernetes environment that doesn't allow privileged containers, e.g. OpenShift Online or Google Kubernetes Engine. It's not possible to set vm.max_map_count from an init container due to the lack of permissions, unless you run your own Kubernetes platform and allow privileged containers.
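
For context, the workaround that is normally used, and that is not available here, is a privileged init container that raises the sysctl, roughly like the sketch below; the image and value are just the commonly documented ones, nothing specific to my setup:

```yaml
# Sketch of the usual privileged init container that raises vm.max_map_count.
# This is exactly what OpenShift Online / GKE won't let me run.
initContainers:
  - name: sysctl
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
```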

After some investigation I found that you can use NIO instead of mmap:

index.store.type: niofs

I tried that, but the bootstrap checks still fired the MaxMapCountCheck. I also tried setting es.enforce.bootstrap.checks to false (not ideal) on the container, but that throws an exception:

org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: [es.enforce.bootstrap.checks] must be [true] but was [false]
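
For reference, here is roughly where each of those two knobs lives; this is a sketch, not an exact copy of my configuration:

```yaml
# elasticsearch.yml (sketch): make niofs the default store type for indices
# created on this node
index.store.type: niofs

# es.enforce.bootstrap.checks is not an elasticsearch.yml setting but a JVM
# system property, so it has to travel via the JVM options, e.g.:
#   ES_JAVA_OPTS: "-Des.enforce.bootstrap.checks=false"
# and, as the exception above shows, Elasticsearch only accepts "true" there.
```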

Looking at the code, the MaxMapCountCheck is hard-wired in, and the es.enforce.bootstrap.checks logic has some extra assumptions built in (it can only be used to force the checks on, not to skip them, as the exception above shows).

If I'm using niofs, should I expect this check to run at all?

It would be ideal if I could just disable the specific MaxMapCountCheck.

I'm willing to tackle a PR if necessary; there are probably several approaches.

I'm trying to avoid having to run a dedicated Kubernetes platform.

@elasticmachine (Collaborator)

Pinging @elastic/es-core-infra

@colings86 added the :Distributed/Network (Http and internode communication implementations) label on Jul 22, 2018
@jasontedor added the :Core/Infra/Resiliency and :Core/Infra/Core labels, and removed the :Distributed/Network and :Core/Infra/Resiliency labels, on Jul 23, 2018
@jibbe changed the title from "Disable mmap check when using NIO" to "Disable mmap check when using NIOFS" on Jul 23, 2018

jasontedor commented Jul 25, 2018

index.store.type is a per-index setting, which means that even if all existing indices are created with index.store.type set to niofs, later indices could be created with the default of mmapfs and could still run into the problem of not having enough virtual address space or allowed memory maps (the point of the max map count check). Moreover, index settings are not readily available to us when executing the bootstrap checks.

I think that we can and should solve this though. I think that we can add a new setting which enumerates the permitted index store types. If mmapfs is not included in the permitted index store types, then we can skip the bootstrap checks that ensure that we have sufficient limits for memory mapping files. We will then not allow indices with a non-permitted store type.

However, this will require a refactoring of how plugins provide additional store types, otherwise we would not be able to validate the list of permitted store types.
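
To make the proposal concrete, such a setting might look roughly like the sketch below in elasticsearch.yml; the name and shape are purely hypothetical, this setting does not exist today:

```yaml
# Hypothetical sketch only -- illustrating the proposal, not an existing setting.
# Enumerate the store types this node may use; since mmapfs is not listed, the
# mmap-related bootstrap check could be skipped, and creating an index with a
# non-permitted store type would be rejected.
node.store.allowed_types: [niofs, simplefs]
```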


jibbe commented Jul 25, 2018

That's a great approach and thanks for the interest (and work already).

I recompiled 6.3.1 and built the ES Docker image with the MaxMapCountCheck commented out as a test (it took a while to set up that dev environment, mostly around the ES Docker build process). I then extended that image with my configuration and deployed it into OpenShift Origin (with no privileged containers) as a 3-pod master deployment. That seems to work, and the masters see each other. I'm still working on a StatefulSet for the data pods to verify it will all work.

I'm basing the Kubernetes config on https://github.com/pires/kubernetes-elasticsearch-cluster, but with the official ES Docker image, and I will probably convert it to an OpenShift DeploymentConfig for ImageStream support. Getting this working in OpenShift Online will be a HUGE bonus for us.
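
For anyone following along, the data-node StatefulSet I'm working toward looks roughly like the sketch below; the image name, sizes and replica count are illustrative, and the niofs / node-role settings live in the elasticsearch.yml baked into my extended image rather than in the manifest:

```yaml
# Rough sketch, not the actual manifest. Note: no privileged init container
# setting vm.max_map_count -- that is the whole point of this issue.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-data
spec:
  serviceName: es-data
  replicas: 2
  selector:
    matchLabels:
      app: es-data
  template:
    metadata:
      labels:
        app: es-data
    spec:
      containers:
        - name: elasticsearch
          # extended official image with elasticsearch.yml baked in
          # (node.data: true, index.store.type: niofs, discovery settings, ...)
          image: my-registry/elasticsearch-niofs:6.3.1
          env:
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"
          ports:
            - containerPort: 9300
              name: transport
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```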


jibbe commented Aug 21, 2018

You rock, @jasontedor!! Thank you.

I will build new images off master and test shortly.

@jasontedor (Member)

Thanks a lot for that, @jibbe. Please let us know how the testing goes; this use case is important to us.
