Ingress Pods Fail to Start #655
Comments
@briananstett could you try using a different namespace? It might be due to that.
Thank you for the reply @oktalz. I created a new namespace on my EKS cluster (version 1.30) and did a fresh installation of HAProxy using the Helm chart with all the default values.
The same behavior starts immediately, though: the pods crash with exit code 137.
and pod logs...
If it's helpful, I also created a new node with taints that only allow the HAProxy workloads to run on it. I updated my HAProxy deployment with the appropriate node selectors and tolerations to get the pods running on that isolated node. Attached are logs from the kubelet and containerd of that node.
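As a general note on that exit code: 137 means the container's main process was killed with SIGKILL (137 = 128 + 9). The signal can come from the kernel OOM killer or from the kubelet (for example after a failed probe, when the container does not exit on SIGTERM). A minimal sketch of decoding it, plus a hypothetical command to inspect the pod's last terminated state (the pod name and namespace below are assumptions):

```shell
# Exit codes above 128 encode a fatal signal: code = 128 + signal number.
sig=$((137 - 128))
echo "signal number: $sig"   # signal number: 9
kill -l "$sig"               # prints: KILL

# To see whether Kubernetes recorded the kill as OOM or probe-related,
# check the last terminated state (pod name/namespace are hypothetical):
#   kubectl get pod haproxy-ingress-0 -n haproxy \
#     -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
```

If the `reason` in that state is `OOMKilled`, raising memory limits is the fix; if it is just `Error` with exit code 137, the kill likely came from the kubelet.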
Sorry to bug you @oktalz, but do you have any other ideas? I'm now seeing this problem with all of my HAProxy workloads across multiple EKS clusters, and I'm running out of things to try.
Just some more information, sorry if this is irrelevant. I noticed that in a working HAProxy pod, these are the running processes:
But in one of my broken HAProxy pods, the running processes are:
And the logs:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm experiencing very strange behavior where sometimes an HAProxy Kubernetes Ingress pod fails to start and begins to crash-loop. The initial kubectl describe output seems to point to the startup probe failing. The issue is sporadic, and resolving it seems to require a "fresh" node that has never run an HAProxy Ingress Controller pod before.
(Initial describe output)
But when I adjust the startup probe configuration to allow more startup time, the pods still crash immediately, now with exit code 137 and s6-overlay error logs.
(Altered startup probe configuration)
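For reference, a relaxed startup probe of the kind described here might look like the following. This is a sketch using the Kubernetes probe API fields; the path, port, and threshold values are illustrative assumptions, not the chart's verified defaults:

```yaml
# Illustrative startupProbe override (check your chart's values.yaml for
# the controller's actual health endpoint and port):
startupProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: 1042          # assumed controller health port
  periodSeconds: 5
  failureThreshold: 60  # tolerates up to ~5 minutes of startup time
```

With `periodSeconds * failureThreshold = 300`, the kubelet will wait up to five minutes before it starts killing the container for a failed startup probe.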
(kubectl describe output from after probe update)
(container logs)
I've tried different versions of the HAProxy Ingress Controller, updating Kubernetes versions, updating node AMIs, altering resource allocations (trying to address the 137 exit code), removing security contexts, and more, with no luck. Oddly, I'm only having this issue on one of the EKS clusters I'm running. The exact same installation works on a different EKS cluster running the same version and configuration.
Specs