OCPBUGS-23435: workloadctl: account for terminating pods #1732
Changes from all commits:
```diff
@@ -262,6 +262,21 @@ func (c *Controller) updateOperatorStatus(ctx context.Context, previousStatus *o
 		desiredReplicas = *(workload.Spec.Replicas)
 	}
 
+	selector, err := metav1.LabelSelectorAsSelector(workload.Spec.Selector)
+	if err != nil {
+		return fmt.Errorf("failed to construct label selector: %v", err)
+	}
+	matchingPods, err := c.podsLister.List(selector)
+	if err != nil {
+		return err
+	}
+	// Terminating pods don't count toward any of the other status fields, but
+	// they can still exist in a state where they are accepting connections,
+	// which would contribute to unexpected behavior if we reported
+	// Progressing=False. Too many pods might occur, for example, when
+	// `TerminationGracePeriodSeconds` is set.
+	tooManyMatchingPods := int32(len(matchingPods)) > desiredReplicas
+
 	// If the workload is up to date, then we are no longer progressing
 	workloadAtHighestGeneration := workload.ObjectMeta.Generation == workload.Status.ObservedGeneration
 	workloadIsBeingUpdated := workload.Status.UpdatedReplicas < desiredReplicas
```
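Not part of the PR itself: a minimal sketch, assuming only the standard `k8s.io/api/core/v1` types, of why the count-based check above fires. A lister returns every pod matching the selector, including terminating ones; a pod whose `DeletionTimestamp` is set has been marked for deletion but may keep serving connections until its grace period elapses, while no longer counting toward Deployment status fields such as `UpdatedReplicas`.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// countTerminating reports how many of the given pods are terminating.
// Terminating pods still match the workload's label selector, so the
// lister returns them and the total can exceed desiredReplicas even
// though Deployment status no longer accounts for them.
func countTerminating(pods []*corev1.Pod) int {
	n := 0
	for _, pod := range pods {
		if pod.DeletionTimestamp != nil {
			n++
		}
	}
	return n
}
```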
```diff
@@ -274,6 +289,10 @@ func (c *Controller) updateOperatorStatus(ctx context.Context, previousStatus *o
 		deploymentProgressingCondition.Status = operatorv1.ConditionTrue
 		deploymentProgressingCondition.Reason = "PodsUpdating"
 		deploymentProgressingCondition.Message = fmt.Sprintf("deployment/%s.%s: %d/%d pods have been updated to the latest generation", workload.Name, c.targetNamespace, workload.Status.UpdatedReplicas, desiredReplicas)
+	} else if tooManyMatchingPods {
+		deploymentProgressingCondition.Status = operatorv1.ConditionTrue
+		deploymentProgressingCondition.Reason = "PreviousGenPodsPresent"
+		deploymentProgressingCondition.Message = fmt.Sprintf("deployment/%s.%s: %d pod(s) from the previous generation are still present", workload.Name, c.targetNamespace, len(matchingPods)-int(desiredReplicas))
 	} else {
 		deploymentProgressingCondition.Status = operatorv1.ConditionFalse
 		deploymentProgressingCondition.Reason = "AsExpected"
```

Review thread on the new `PreviousGenPodsPresent` condition:

**Reviewer:** This may not always be correct. There may be extra pods for different reasons. E.g. if they are disrupted/evicted for some reason, the deployment controller will create extra pods to account for that.

**Author:** How would you word it, then? Just too many pods?

**Reviewer:** In k8s, this is considered a complete deployment, so it depends on what you want to convey. There can also be extra pods during a rollout with `maxSurge`, but in that case the pods will have different revisions/hashes.

**Author:** I suppose the message should be about pods of a different revision still existing. Would you be able to think of any other case where extra pods might cause unexpected behavior?

**Reviewer:** Apart from the terminating pods, which are the subject of this bug, no. I guess there might be some exotic cases where the pods can be owned by another controller, but that can be safely ignored here.
A second thread on the same hunk:

**Reviewer:** Safer would be to test if all the pods have the same hash (`pod-template-hash` label). This would work well in combination with `workloadIsBeingUpdated`.

**Author:** Is `pod-template-hash` set on 100% of our deployments?

**Reviewer:** Yes, it is: https://github.com/kubernetes/kubernetes/blob/0590bb1ac495ae8af2a573f879408e48800da2c5/pkg/controller/deployment/sync.go#L191
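For reference, a sketch of the check the reviewer suggests (illustrative only, not code from this PR), relying on the `pod-template-hash` label that the Deployment controller stamps on every pod it creates:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// podsAtSingleRevision reports whether every pod carries the same
// pod-template-hash label, i.e. whether all pods belong to one ReplicaSet
// revision. During a rollout (e.g. with maxSurge), or while pods from a
// previous generation are still terminating, pods from two revisions
// coexist and this returns false.
func podsAtSingleRevision(pods []*corev1.Pod) bool {
	seen := ""
	for _, pod := range pods {
		hash := pod.Labels["pod-template-hash"]
		if hash == "" {
			// A pod without the label was not created by the Deployment
			// controller; treating that as a mixed state is a policy choice.
			return false
		}
		if seen == "" {
			seen = hash
		} else if hash != seen {
			return false
		}
	}
	return true
}
```

Combined with `workloadIsBeingUpdated`, a check like this would distinguish extra pods of the current revision (e.g. replacements for evicted pods, which k8s treats as a complete deployment) from lingering pods of an older revision.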
**Author:** I think the pod spec can be the same, only the underlying config changed. Would that still work?

**Reviewer:** The deployment rollout has to be triggered somehow, and it seems that you are triggering it with the resource revisions of the dependencies. So yes, it should work: https://github.com/openshift/cluster-authentication-operator/blob/b415439ebab2829c8da1ea17c05f2ac75fe5dbe8/pkg/controllers/deployment/default_deployment.go#L54
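For context on that last point, the general pattern being referred to might look roughly like the sketch below; the helper name and annotation key are hypothetical, not taken from the linked operator code. Stamping dependency revisions into the pod template means a config-only change still produces a new pod template, and therefore a new ReplicaSet with a new `pod-template-hash`.

```go
package sketch

import appsv1 "k8s.io/api/apps/v1"

// applyDependencyRevisions writes the resourceVersions of dependent objects
// (ConfigMaps, Secrets, ...) into the Deployment's pod template annotations.
// Any change to a dependency changes the pod template, which triggers a
// rollout even when the pod spec itself is unchanged.
func applyDependencyRevisions(deployment *appsv1.Deployment, revisions map[string]string) {
	if deployment.Spec.Template.Annotations == nil {
		deployment.Spec.Template.Annotations = map[string]string{}
	}
	for name, rv := range revisions {
		// Hypothetical key scheme, e.g. "operator.example.com/rv.configmap.oauth-config".
		deployment.Spec.Template.Annotations["operator.example.com/rv."+name] = rv
	}
}
```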