MachineDeployment refs and ownerRefs are not upgraded after CAPI upgrade with apiVersion bump #6778
Duplicate of an issue I created last year (#5470) :(
/close
@sbueringer: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
Realized this issue is more comprehensive as it also mentions ownerRefs.
@sbueringer: Reopened this issue. In response to this:
@sbueringer: This issue is currently awaiting triage. If CAPI contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label.
/assign @killianmuldoon
Sorry for the back and forth. I created this issue for ownerRefs: #7224. I've done some additional research on refs and will open a separate issue to address the apiVersion-bump-in-refs topic.
/close
@sbueringer: Closing this issue. In response to this:
/reopen
Let's use this issue to track the fix for the ref bump in MD and also the implementation for MachinePool.
/assign
/milestone v1.4
@sbueringer: Reopened this issue. In response to this:
What steps did you take and what happened:
I ran the clusterctl upgrade e2e test locally (v1alpha4 => v1beta1) and the refs in the MachineDeployment were not upgraded from v1alpha4 to v1beta1.
Some details:
Refs are initially in-memory bumped to v1beta1 through:
cluster-api/internal/controllers/machinedeployment/machinedeployment_controller.go
Lines 215 to 217 in d7bf8df
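The linked lines aren't reproduced here; as a rough, self-contained illustration of what that in-memory bump amounts to (trimmed stand-in types and a hypothetical `bumpAPIVersion` helper, not the real CAPI code):

```go
package main

import "fmt"

// ObjectReference is a trimmed stand-in for corev1.ObjectReference as used
// in a MachineDeployment's bootstrap/infrastructure template refs.
type ObjectReference struct {
	APIVersion string
	Kind       string
	Name       string
}

// bumpAPIVersion illustrates the in-memory bump: the ref is rewritten to the
// latest served apiVersion. (Hypothetical helper, for illustration only.)
func bumpAPIVersion(ref *ObjectReference, latest string) {
	ref.APIVersion = latest
}

func main() {
	ref := ObjectReference{
		APIVersion: "infrastructure.cluster.x-k8s.io/v1alpha4",
		Kind:       "DockerMachineTemplate",
		Name:       "md-0",
	}
	bumpAPIVersion(&ref, "infrastructure.cluster.x-k8s.io/v1beta1")
	fmt.Println(ref.APIVersion) // infrastructure.cluster.x-k8s.io/v1beta1
}
```

Note that this change only exists on the in-memory copy of the object at this point; nothing has been written back to the apiserver yet.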
At some later time we overwrite the in-memory modified version of MD by getting the current one from the server:
cluster-api/internal/controllers/machinedeployment/machinedeployment_sync.go
Lines 133 to 136 in d7bf8df
This drops all the changes we made in the meantime, so the refs stay on v1alpha4 (or on v1alpha3, depending on which version the MD was initially created with).
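To make the failure mode concrete, here is a minimal sketch of the overwrite pattern (stand-in types and a hypothetical `fakeGet` in place of a real `client.Get`, not the actual CAPI code):

```go
package main

import "fmt"

// MachineDeployment is a trimmed stand-in for the real type; only the
// infrastructure ref's apiVersion matters for this sketch.
type MachineDeployment struct {
	InfraRefAPIVersion string
}

// fakeGet simulates re-reading the object from the apiserver: the stored
// copy still carries the old apiVersion.
func fakeGet() MachineDeployment {
	return MachineDeployment{InfraRefAPIVersion: "infrastructure.cluster.x-k8s.io/v1alpha4"}
}

func main() {
	// Step 1: the reconciler bumps the ref in memory.
	md := fakeGet()
	md.InfraRefAPIVersion = "infrastructure.cluster.x-k8s.io/v1beta1"

	// Step 2 (the bug): later code re-reads the object from the server,
	// replacing the in-memory copy and silently dropping the bump.
	md = fakeGet()

	fmt.Println(md.InfraRefAPIVersion) // infrastructure.cluster.x-k8s.io/v1alpha4 — the bump was lost
}
```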
Example in CI: https://storage.googleapis.com/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1542205506544734208/artifacts/clusters/clusterctl-upgrade-f4s62o/resources/clusterctl-upgrade/MachineDeployment/clusterctl-upgrade-rlxhz4-md-0.yaml
We need to fix this in a minor CAPI release before we can drop v1alpha3 from our CRDs in a subsequent release!
Additionally we have to verify that after a CAPI upgrade with apiVersion bumps all ownerReferences have been upgraded.
See ownerRef to v1alpha4 Cluster here: https://storage.googleapis.com/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1542205506544734208/artifacts/clusters/clusterctl-upgrade-f4s62o/resources/clusterctl-upgrade/MachineDeployment/clusterctl-upgrade-rlxhz4-md-0.yaml
If we don't do this and just drop the version, we run into all kinds of deadlocks (at the latest when trying to delete a Cluster/MD/...).
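Verifying this could look something like the sketch below: scan the ownerReferences of each object for apiVersions that are about to be dropped. The `staleOwnerRefs` helper and the trimmed `OwnerReference` type are hypothetical, for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// OwnerReference is a trimmed stand-in for metav1.OwnerReference.
type OwnerReference struct {
	APIVersion string
	Kind       string
	Name       string
}

// staleOwnerRefs returns the ownerRefs still pointing at an apiVersion that
// is about to be dropped from the CRD. (Hypothetical helper.)
func staleOwnerRefs(refs []OwnerReference, droppedVersions ...string) []OwnerReference {
	var stale []OwnerReference
	for _, ref := range refs {
		for _, v := range droppedVersions {
			if strings.HasSuffix(ref.APIVersion, "/"+v) {
				stale = append(stale, ref)
			}
		}
	}
	return stale
}

func main() {
	refs := []OwnerReference{
		{APIVersion: "cluster.x-k8s.io/v1alpha4", Kind: "Cluster", Name: "clusterctl-upgrade"},
		{APIVersion: "cluster.x-k8s.io/v1beta1", Kind: "MachineSet", Name: "md-0-abc"},
	}
	for _, ref := range staleOwnerRefs(refs, "v1alpha3", "v1alpha4") {
		fmt.Printf("stale ownerRef: %s %s (%s)\n", ref.Kind, ref.Name, ref.APIVersion)
	}
}
```

In the linked CI artifact, a check like this would flag the v1alpha4 Cluster ownerRef on the MachineDeployment.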
P.S. Found this issue while playing around with clusterctl CRD migration and trying to drop v1alpha3 from our CRDs.
Environment:
- Kubernetes version: (use `kubectl version`)
- OS (e.g. from `/etc/os-release`):

/kind bug
[One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels]