pods become deregistered, then after re-registering, health status still reports "registering" #2420
Comments
@laurie-kepford, could you provide the following?
Also refer to the similar issue #2366 (comment).
How do you expose your application? NLB or ALB?
I am going to deploy the latest version of my application tonight, which should stop the containers from restarting.
@laurie-kepford, if the issue persists, would you mind opening a support ticket with AWS Support? You could also email your cluster ARN to k8s-alb-controller-triage AT amazon.com.
So we had a setting in our app that was causing one of the 4 containers inside the pod to restart. We have fixed that, and this problem is now resolved.
It seems that containers become deregistered, perhaps because of a pod restart, or because a host gets terminated and the pod gets moved to a new host.
However:
The result of the following command shows the pod is still trying to register, even many hours later:

```
kubectl get pod podname -o yaml -n namespace | grep -B7 'type: target-health'
```
The pod eventually registers but this status stays the same.
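For readers reproducing the check offline, here is a minimal sketch of what that grep matches in the pod's `status.conditions`. The condition-name suffix, timestamps, reason, and message below are invented sample values for illustration, not output from the reporter's cluster:

```shell
# Hypothetical excerpt of the "status:" block from
# `kubectl get pod <pod> -o yaml` for a pod behind the controller's
# target-health readiness gate.
status_yaml='conditions:
- lastProbeTime: null
  lastTransitionTime: "2021-11-30T10:15:00Z"
  message: Initial health checks in progress
  reason: Elb.RegistrationInProgress
  status: "False"
  type: target-health.elbv2.k8s.aws/k8s-default-myapp-0123456789'

# Same filter the report uses: the condition "type:" line plus up to
# 7 lines of context above it, which captures the status, reason, and
# message fields of the readiness-gate condition.
printf '%s\n' "$status_yaml" | grep -B7 'type: target-health'
```

A condition stuck at `status: "False"` with a registration-in-progress reason is what the reporter describes: the target eventually registers, but the condition is not updated.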
This is a production system with 200 applications, and I see this happening every 4 to 6 hours for different apps. If left alone, most of them fix themselves after 5 or 10 minutes, but waiting that long is not an acceptable solution.
My environment:
- EKS - Kubernetes version 1.19
- Rancher - version 2.6.1
- Namespace has label:
- AWS LB controller 2.3
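The namespace label itself did not survive in the report above. For reference, the AWS Load Balancer Controller injects target-health readiness gates into pods whose namespace carries the label shown below; this is a reconstruction from the controller's documented convention, not the reporter's actual manifest, and the namespace name is hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # hypothetical; substitute the real namespace
  labels:
    # Tells the controller's webhook to inject target-health
    # readiness gates into pods created in this namespace.
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
```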