LivenessProbes not working #26398
I've also just encountered this. Both the readiness probe and the liveness probe are failing after installing nginx. I have a Kubernetes EKS cluster which is IPv6 only. Curious if you have a similar setup; I wonder if it's because the liveness and readiness probes are not configured correctly for IPv6.
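One way to check that theory on a running release is to look at which address families the pod actually has and compare them with the probe target. This is only a sketch; the pod name and namespace are placeholders:

```console
# Placeholders: my-etcd-0 / my-namespace. Shows whether the pod has an
# IPv4 address, an IPv6 address, or both.
kubectl get pod my-etcd-0 -n my-namespace -o jsonpath='{.status.podIPs}{"\n"}'
```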
We run self-managed K8s clusters on-prem and on cloud infrastructure. This was failing in a dev-stage cluster, IPv4 only.
bitnami/etcd 10.1.1 is also affected by the original issue.
I think the problem here is that the incorrect port is being queried. |
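If that is the case, it should show up when comparing the rendered probe with the ports the container actually exposes. A sketch, with pod name and namespace as placeholders:

```console
# Placeholders: my-etcd-0 / my-namespace. Print the liveness probe the
# kubelet uses and the container ports, so the two can be compared.
kubectl get pod my-etcd-0 -n my-namespace \
  -o jsonpath='{.spec.containers[0].livenessProbe}{"\n"}{.spec.containers[0].ports}{"\n"}'
```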
I have the same problem when disabling RBAC and only using client authentication with certificates. There is a simple workaround until this is fixed inside the template:

```yaml
customLivenessProbe:
  httpGet:
    port: 9090
    path: /livez
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
metrics:
  useSeparateEndpoint: true
```
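In case it helps, these values can be applied to an existing release roughly like this; the release name, namespace and values file name are placeholders:

```console
# Merge the workaround values into an existing release (placeholders:
# my-etcd / my-namespace / probe-workaround-values.yaml).
helm upgrade my-etcd bitnami/etcd \
  --namespace my-namespace \
  --reuse-values \
  -f probe-workaround-values.yaml
```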
Thanks a lot @BobVanB. To be perfectly blunt, I don't see an easy solution if we kept the
Hi @fmulero, I'm not going to touch this topic any further; there has been enough discussion about it. With kind regards,
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/etcd 10.1.0
What architecture are you using?
amd64
What steps will reproduce the bug?
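(The report does not list the exact commands. Based on the chart name and version above, a minimal reproduction is presumably just a default install and waiting for the probes to fail; the release name and namespace below are placeholders:)

```console
# Assumed reproduction, not taken from the original report.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-etcd bitnami/etcd --version 10.1.0 \
  --namespace my-namespace --create-namespace
# Watch the pods restart once the liveness probe starts failing.
kubectl get pods -n my-namespace -w
```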
What is the expected behavior?
The cluster should be up and running stably.
What do you see instead?
Pods restart after a short time due to failing HTTP liveness probes on PodIP:2379/health.
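To confirm what the kubelet sees, the probe failures show up in the pod events, and the same path can be queried from inside the pod. A sketch with placeholder names; whether curl is present in the image and whether the endpoint requires TLS/auth depends on your configuration:

```console
# Placeholders: my-etcd-0 / my-namespace.
# Show the kubelet's probe failure events:
kubectl describe pod my-etcd-0 -n my-namespace
# Hit the same path the probe uses (assumes curl exists in the image):
kubectl exec -n my-namespace my-etcd-0 -- curl -si http://127.0.0.1:2379/health
```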
Additional information
This seems to be related to #25984, where the liveness probes were changed.