Upgrade to ElasticSearch v5.0 #60
Unfortunately, can't do. Running Elasticsearch 5.0.0 on Docker is proving to be really hard. I just can't support having users change their Kubernetes node configuration with sysctl settings like vm.max_map_count. |
I think it would still be good to already have a 5.0.0 branch. As far as I know the vm.max_map_count requirement is a hard one in 5.0, see https://www.elastic.co/guide/en/elasticsearch/reference/5.0/vm-max-map-count.html So there is probably no way around it. |
We are trying to set this via init-container: giantswarm/kubernetes-elastic-stack/manifests/elasticsearch-deployment.yaml#L13 Obviously it's not nice to change settings on the host from a privileged container, but it seems to work for now. |
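For clusters older than Kubernetes 1.6, an init container like the one referenced above would be declared with the beta annotation syntax. A minimal sketch, assuming the busybox image; it is illustrative only and not copied from the linked manifest:

metadata:
  annotations:
    # beta syntax used before the initContainers field existed (Kubernetes < 1.6);
    # runs a privileged one-shot container that raises the host's vm.max_map_count
    pod.beta.kubernetes.io/init-containers: '[
      {
        "name": "init-sysctl",
        "image": "busybox",
        "imagePullPolicy": "IfNotPresent",
        "command": ["sysctl", "-w", "vm.max_map_count=262144"],
        "securityContext": {"privileged": true}
      }
    ]'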
@pires the init-container solution described above is acceptable for me. Would you consider putting 5.0.0 on a feature branch for now as suggested by @AtzeDeVries? |
Will have to take some time to try it out on a few different setups, namely GKE. |
@pires I would be happy to test on GKE also if you provide a 5.0.0 image. |
OK, I have the changes ready to push. Will do in the next few hours and ping you. |
@pires the |
I have released an image but I haven't been able to test it - I'm revamping |
I am testing image: quay.io/pires/docker-elasticsearch-kubernetes:5.0.0 but all the ES pods error out with the following log message:
Has anyone been able to successfully run @pires's new version yet? |
I think the error may be related to the installation of the elasticsearch-cloud-kubernetes plugin via
which results in the following error in the output of the docker image build:
Though at least part of these warnings appear to have been there already in the 2.x version, see https://hub.docker.com/r/imelnik/docker-elasticsearch-kubernetes/builds/brzpwudjamraxdechd5xnak/ |
That's not an error but a warning about the fact that the plug-in requires additional permissions. I will have to give this a try; I will do it later this week. |
@pires I saw the image 5.0.0 in pires/docker-elasticsearch-kubernetes. |
I was able to run it. I am updating this repo later today, if I'm able to reproduce my local setup on GKE. |
@aeneaswiener it kinda works for me:
I say kinda because it's killed from time to time. I used the init-container as per @puja108's instructions. |
Actually, I just realized I'm having the same issue as you, but after a while and a couple of pod restarts it just works! |
If I have a cluster running 2.4 in GKE, how can I upgrade it to 5.0? |
To anyone trying this, I released a new image. |
@pires I tried the new image in my GKE cluster, everything is working fine,
and of course with an init container to set vm.max_map_count. |
How many instances of each do you have? How much memory did you set? Also, is your init container set up as someone suggested above? |
Just FYI, as ES storage can get pretty big over time, we also added a ScheduledJob running the ES Curator once a day to clean up old indices: https://github.com/giantswarm/kubernetes-elastic-stack/blob/master/manifests/curator-scheduledjob.yaml
The config is kept in a ConfigMap: https://github.com/giantswarm/kubernetes-elastic-stack/blob/master/manifests/curator-configmap.yaml |
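A rough sketch of such a cleanup job, assuming the batch/v2alpha1 ScheduledJob API of that era (later renamed CronJob); the image name, curator arguments, and ConfigMap name below are assumptions, not copied from the linked manifests:

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: elasticsearch-curator
spec:
  schedule: "0 1 * * *"               # run once a day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: curator
            image: bobrik/curator     # assumption: any image bundling elasticsearch-curator
            args: ["--config", "/etc/config/config.yml", "/etc/config/action_file.yml"]
            volumeMounts:
            - name: config
              mountPath: /etc/config
          volumes:
          - name: config
            configMap:
              name: curator-config    # assumption: ConfigMap holding both curator files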
@puja108 that's really cool and it would be awesome to have that as an add-on to this repo, if you're willing to contribute it. |
Will do a PR |
Has anyone tried the latest 5.0.1 image and found it to work or any issues? |
I'm able to deploy and the cluster comes up properly. But when I scale the data node (any node, for that matter), it fails with the message below
Can you please add an environment variable for "node.max_local_storage_nodes"? |
Sure. Can you open an issue on github.com/pires/docker-elasticsearch? |
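For context, node.max_local_storage_nodes caps how many Elasticsearch nodes may share a single data path, which is what trips up scaling when several pods end up reusing the same path. A hedged sketch of how the requested variable might eventually look in the data-node container spec; MAX_LOCAL_STORAGE_NODES is a hypothetical name, not an env var the image documents today:

env:
- name: MAX_LOCAL_STORAGE_NODES   # hypothetical; intended to map to node.max_local_storage_nodes
  value: "2"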
Hi,
We have successfully deployed Elasticsearch 5.0 on Kubernetes.
Thanks.
|
The
Trying to figure out a workaround. |
False alarm. Destroying and recreating pods appears to have applied the |
For posterity, with Kubernetes 1.6 it seems init containers moved out of beta, and the syntax is now:
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true |
Done. |