OCPBUGS-23435: workloadctl: account for terminating pods #1732
base: master

Conversation
@stlaz: This pull request references Jira Issue OCPBUGS-23435, which is invalid.
The bug has been updated to refer to the pull request using the external bug tracker.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: stlaz. Needs approval from an approver in each of these files.
Don't set Progressing=False if some pods from the previous generation are still running.
@stlaz: all tests passed!
} else if tooManyMatchingPods {
	deploymentProgressingCondition.Status = operatorv1.ConditionTrue
	deploymentProgressingCondition.Reason = "PreviousGenPodsPresent"
	deploymentProgressingCondition.Message = fmt.Sprintf("deployment/%s.%s: %d pod(s) from the previous generation are still present", workload.Name, c.targetNamespace, len(matchingPods)-int(desiredReplicas))
This may not always be correct. There may be extra pods for different reasons. E.g. if they are disrupted/evicted for some reason, the deployment controller will create extra pods to account for that.
How would you word it, then? Just too many pods?
In k8s, this is considered a complete deployment, so it depends on what you want to convey. There can also be extra pods during a rollout with maxSurge, but in that case the pods will have different revisions/hashes.
I suppose the message should be about pods of a different revision still existing. Would you be able to think of any other case where extra pods might cause unexpected behavior?
Apart from the terminating pods, which are the subject of this bug, no.
I guess there might be some exotic cases where the pods are owned by another controller, but that can be safely ignored here.
// contribute to unexpected behavior if we report Progressing=False.
// The case of too many pods might occur for example if `TerminationGracePeriodSeconds`
// is set.
tooManyMatchingPods := int32(len(matchingPods)) > desiredReplicas
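For context, a minimal sketch (not code from this PR) of why terminating pods can push the matching-pod count above desiredReplicas: a pod being replaced keeps a non-nil DeletionTimestamp until its TerminationGracePeriodSeconds expires, so a plain length comparison counts it alongside its replacement. The helper name below is hypothetical; matchingPods and desiredReplicas are assumed from the surrounding controller code.

package workload

import (
	corev1 "k8s.io/api/core/v1"
)

// countActiveAndTerminating is a hypothetical helper that splits the pods
// matching the deployment's selector into pods that are still terminating
// (DeletionTimestamp set, waiting out their grace period) and the rest.
func countActiveAndTerminating(matchingPods []*corev1.Pod) (active, terminating int32) {
	for _, pod := range matchingPods {
		if pod.DeletionTimestamp != nil {
			terminating++
			continue
		}
		active++
	}
	return active, terminating
}

A split like this could be used either to report how many of the extra pods are still terminating or to refine the condition message discussed above; it is only an illustration of the mechanism, not the approach the PR settles on.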
Safer would be to test if all the pods have the same hash (pod-template-hash label). This would work well in combination with workloadIsBeingUpdated.
Is pod-template-hash set on 100% of our deployments?
I think the pod spec can stay the same while only the underlying config changes. Would that still work?
The deployment rollout has to be triggered somehow, and it seems you are triggering it with the resource revisions of the dependencies. So yes, it should work.
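A minimal sketch of the pod-template-hash check suggested above, assuming the controller already holds the list of matching pods; the helper name and package are hypothetical, and the label key is the one the Deployment controller stamps on every pod of a ReplicaSet.

package workload

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// allPodsShareTemplateHash reports whether every matching pod carries the same
// pod-template-hash label, i.e. whether all pods belong to a single ReplicaSet
// revision of the deployment.
func allPodsShareTemplateHash(matchingPods []*corev1.Pod) bool {
	seen := ""
	for _, pod := range matchingPods {
		// appsv1.DefaultDeploymentUniqueLabelKey == "pod-template-hash"
		hash, ok := pod.Labels[appsv1.DefaultDeploymentUniqueLabelKey]
		if !ok {
			// A pod without the label was not created by the Deployment controller.
			return false
		}
		if seen == "" {
			seen = hash
			continue
		}
		if hash != seen {
			return false
		}
	}
	return true
}

Combined with workloadIsBeingUpdated, a false result outside of an ongoing rollout would suggest leftover pods from a previous revision, which is roughly what the conversation above converges on.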
/assign @openshift/openshift-team-auth
/cc @deads2k