Enhance status for deploymentconfigs #6233
Conversation
@@ -137,7 +138,10 @@ func (c *DeploymentConfigController) Handle(config *deployapi.DeploymentConfig)
        return fmt.Errorf("couldn't create deployment for deployment config %s: %v", deployutil.LabelForDeploymentConfig(config), err)
    }
    c.recorder.Eventf(config, "DeploymentCreated", "Created new deployment %q for version %d", created.Name, config.LatestVersion)
    return nil
    // Update deploymentConfig status; we probably need a separate call for status
If we're introducing the idea of a status, let's introduce a status at least internally. Since status should be read-only, we can be compatible by duplicating existing fields in two locations.
Does the deployer pod update logical status? If so, then we'll need to respect changes on it if the status stanza is missing. If not, and it's only the baked-in controller, then we should be good to ignore status updates on old fields.
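To make the proposal concrete, here is a minimal sketch of a spec/status split in the spirit of this discussion. The field set and names below are illustrative only, drawn from the fields visible in this review; the real internal types (metadata, template, triggers, and the duplicated legacy top-level fields kept for external compatibility) are omitted.

// Illustrative only; not the final API.
package deployapi

type DeploymentConfigSpec struct {
    // Replicas is the desired number of pods, writable by clients.
    Replicas int
    // Selector is a label query over pods that should match the Replicas count.
    Selector map[string]string
}

type DeploymentConfigStatus struct {
    // LatestVersion is bumped to trigger a new deployment; reconciled by the
    // controllers and treated as read-only by clients.
    LatestVersion int
    // Replicas is the observed count aggregated from the RCs owned by the config.
    Replicas int
}

type DeploymentConfig struct {
    // ObjectMeta/TypeMeta and the duplicated legacy fields omitted for brevity.
    Spec   DeploymentConfigSpec
    Status DeploymentConfigStatus
}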
The deployer pod updates the status on the replication controller; it never touches the config. I think that status would be internal only. @ironcladlou, thoughts?
The deployer pod updates the status on the replication controller; it never touches the config. I think that status would be internal only. @ironcladlou, thoughts?
I'm fine with a deployer pod updating status. I just wanted to make sure we handled it properly for backwards compatibility. Sounds like we don't have to worry about it.
The strategies just scale RCs up and down and don't touch the config. They also have no reason to read the new field via public API or anything. So I don't think introducing status to the config should have an effect outside the controllers.
Replicas int

// Selector is a label query over pods that should match the Replicas count.
Selector map[string]string
Is this new? What was the existing field?
Can a single selector select all the replicas we care about? What if you are rolling between two deployments which resulted from two versions of the config and have different selectors?
Can a single selector select all the replicas we care about? What if you are rolling between two deployments which resulted from two versions of the config and have different selectors?
I asked @Kargakis to split this into one pull for making spec/status with no other changes and then a separate pull to add the changes. I think this is new, so that should help us find and see what this is doing.
The new fields were behind DeploymentTemplate previously.
The new fields were behind DeploymentTemplate previously.
Ah, definitely interested in that separate pull then. Checking congruence in one and then reasoning about changes in another.
Couldn't we have updated the Selector of the DC and orphaned existing pods for an old RC created by this DC?
@@ -137,7 +138,10 @@ func (c *DeploymentConfigController) Handle(config *deployapi.DeploymentConfig)
        return fmt.Errorf("couldn't create deployment for deployment config %s: %v", deployutil.LabelForDeploymentConfig(config), err)
    }
    c.recorder.Eventf(config, "DeploymentCreated", "Created new deployment %q for version %d", created.Name, config.Status.LatestVersion)
    return nil
    // Update deploymentConfig status; we probably need a separate call for status
Opened #6465 for a separate status update call.
oldReplicas := deploymentConfig.Spec.Replicas
deploymentConfig.Spec.Replicas = scale.Spec.Replicas
if err := r.registry.UpdateDeploymentConfig(ctx, deploymentConfig); err != nil {
    return nil, false, err
}

scaleRet.Spec.Replicas = scale.Spec.Replicas
scaleRet.Status.Replicas = totalReplicas + (scale.Spec.Replicas - oldReplicas)
With @ironcladlou's change to how DeploymentConfig scaling works, I don't think this calculation is accurate any more. We should just keep TotalReplicas == oldReplicas, since the replica count on RCs controlled by the DC won't change until the DC controller updates them.
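A sketch of the suggested adjustment against the excerpt above, reusing its variables; illustrative, not a verified patch:

scaleRet.Spec.Replicas = scale.Spec.Replicas
// Report only what is currently observed on the RCs owned by the DC; the
// requested delta is not reflected until the DC controller reconciles.
scaleRet.Status.Replicas = totalReplicas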
#6040 flake
// TODO: Use a separate call for updating the status of a DC
// https://github.com/openshift/origin/issues/6465
config.Status.Replicas = kdeputil.GetReplicaCountForRCs(deployments)
_, err := c.osClient.DeploymentConfigs(config.Namespace).Update(config)
@ironcladlou won't this result in a conflict if an older client had scaled the DC (L220)?
I think you're right; shouldn't we be reassigning the local config after the update on L220?
I think you're right; shouldn't we be reassigning the local config after the update on L220?
Not finding the conflict in a test suggests we have a test missing.
It seems like this update is more properly a patch until we have separate status updating.
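Until there is a dedicated status call or patch support, one low-tech way to narrow the conflict window is to re-fetch the config just before writing its status, so the update carries a fresh resourceVersion even if an older client scaled the DC in the meantime. A minimal sketch, reusing the identifiers from the excerpt above and assuming a Get method on the same client interface:

// Sketch only: refresh the config before the status write instead of reusing
// the possibly stale local copy. A real fix might be a patch or a status call.
latest, err := c.osClient.DeploymentConfigs(config.Namespace).Get(config.Name)
if err != nil {
    return err
}
latest.Status.Replicas = kdeputil.GetReplicaCountForRCs(deployments)
if _, err := c.osClient.DeploymentConfigs(config.Namespace).Update(latest); err != nil {
    return err
}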
I think you're right; shouldn't we be reassigning the local config after the update on L220?
Would using a copy of the dc in the last update call not conflict?
Not finding the conflict in a test suggests we have a test missing.
We need to use older clients in our tests.
Or in this specific case we could emulate an older client by manually updating the latest RC for a DC.
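A rough sketch of what emulating an older client could look like in a test helper: scale the latest RC directly, the way a client without spec/status on the DC would have. The kube client package and interface, and the <config>-<version> RC naming, are assumptions here, not taken from this pull:

// Hypothetical test helper; illustrative only.
package extended

import (
    "fmt"

    kclient "k8s.io/kubernetes/pkg/client/unversioned" // assumed client package for this era of the codebase
)

// simulateOldClientScale bumps the replica count directly on the latest RC,
// the way an older client (unaware of the DC spec/status split) would have.
func simulateOldClientScale(kubeClient kclient.Interface, namespace, configName string, version, replicas int) error {
    rcName := fmt.Sprintf("%s-%d", configName, version) // assumed deployment naming scheme
    rc, err := kubeClient.ReplicationControllers(namespace).Get(rcName)
    if err != nil {
        return err
    }
    rc.Spec.Replicas = replicas
    _, err = kubeClient.ReplicationControllers(namespace).Update(rc)
    return err
}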
This API change is approved.
Rebased on top of the latest controller changes, shared a pod cache, added an extended test. This will need another look @mfojtik @ironcladlou @smarterclayton @deads2k
[test]
Evaluated for origin test up to 0e85054
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5286/)
LGTM
[merge]
This will also enable
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5286/) (Image: devenv-rhel7_4443)
Evaluated for origin merge up to 0e85054
Add various status fields to enhance the user experience. The fields are also required for round-tripping upstream Deployments.
@ironcladlou @openshift/api-review PTAL
Fixes #5918
Part of https://trello.com/c/Mljldox7/643-5-deployments-downstream-support-for-upstream-features
@mfojtik @pweil- fyi