CAPI controllers should keep ownerRefs on the current apiVersion #7224

Closed
4 tasks
Tracked by #8038
sbueringer opened this issue Sep 15, 2022 · 13 comments · Fixed by #8256 or #9269
Assignees
killianmuldoon
Labels
kind/feature Categorizes issue or PR as related to a new feature. triage/accepted Indicates an issue or PR is ready to be actively worked on.
Milestone
v1.4
Comments

@sbueringer
Member

sbueringer commented Sep 15, 2022

I took a closer look at our ownerRefs and how they evolve (or not) after ClusterAPI upgrades. In some cases we bump the apiVersions automatically, in other cases we don't. Our cluster deletion reconciliation works independently of the apiVersion, but the Kubernetes garbage collection does not.

An example:

  • MD > MS > M is created with v1alpha3
  • ownerRefs in MS and M are set with v1alpha3
  • CAPI is upgraded a few times => ownerRefs stay on v1alpha3
  • Eventually CAPI is upgraded to a version which doesn’t have v1alpha3
  • => Kubernetes garbage collection is broken

Now let's take a look at how MachineDeployment deletion usually works:

  • MD is deleted by the user
  • kube-controller-manager garbage collector triggers deletion of MS when MD is gone
  • MS is deleted
  • kube-controller-manager garbage collector triggers deletion of Machines when MS is gone

Let's now assume the case from above: MS has an ownerRef to the MD with an apiVersion that doesn't exist anymore (e.g. v1alpha3 in the example). The MD deletion now won't work anymore:

  • MD is deleted
  • MS still exists
    • The kube-controller-manager garbage collector is not able to “get” the MD in the old apiVersion

This seems like a bug to me and a blocker for the removal of the v1alpha3 apiVersion. I don't know exactly for which resources we depend on the Kubernetes garbage collection, but I think we should keep all ownerRefs we set on the current apiVersion.

Proposed tasks:

  • investigate whether there is upstream guidance on how to manage ownerRefs
  • audit all controllers/resources to figure out which ones are setting ownerRefs
  • fix controllers to keep ownerRefs up-to-date (one example is the MD controller; see the sketch after this list)
  • broadcast this issue in office hours as it might be helpful for providers
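
A minimal sketch of what the "keep ownerRefs up-to-date" task could look like, assuming a hypothetical helper (`bumpOwnerRefAPIVersions` is illustrative, not part of the actual CAPI codebase): it rewrites any ownerRef that matches a given Group/Kind to the apiVersion the controller is currently built against, and a controller could apply this on every reconcile and patch the object if anything changed.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// bumpOwnerRefAPIVersions is a hypothetical helper (not the actual CAPI
// implementation): it returns a copy of the given owner references where
// every reference matching gk is rewritten to currentAPIVersion, e.g. from
// "cluster.x-k8s.io/v1alpha3" to "cluster.x-k8s.io/v1beta1".
func bumpOwnerRefAPIVersions(refs []metav1.OwnerReference, gk schema.GroupKind, currentAPIVersion string) []metav1.OwnerReference {
	out := make([]metav1.OwnerReference, len(refs))
	copy(out, refs)
	for i := range out {
		gv, err := schema.ParseGroupVersion(out[i].APIVersion)
		if err != nil {
			continue // leave malformed references untouched
		}
		// Match on Group and Kind only; the stored version may be stale.
		if gv.Group == gk.Group && out[i].Kind == gk.Kind {
			out[i].APIVersion = currentAPIVersion
		}
	}
	return out
}

func main() {
	refs := []metav1.OwnerReference{{
		APIVersion: "cluster.x-k8s.io/v1alpha3", // stale version written by an old release
		Kind:       "MachineDeployment",
		Name:       "md-example", // made-up name for the example
	}}
	gk := schema.GroupKind{Group: "cluster.x-k8s.io", Kind: "MachineDeployment"}
	fmt.Println(bumpOwnerRefAPIVersions(refs, gk, "cluster.x-k8s.io/v1beta1"))
}
```

Matching on Group and Kind only is what makes this safe across apiVersion upgrades; the version in the stored reference is exactly the part that can be stale.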

It would be nice to have this implemented for v1.3.0 so we can consider dropping v1alpha3/v1alpha4 in the next releases.

/kind feature

@sbueringer sbueringer added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 15, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 15, 2022
@sbueringer
Member Author

cc @killianmuldoon Just in case you're interested

@killianmuldoon
Contributor

/assign

I'll take a look at this

@fabriziopandini
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 16, 2022
@killianmuldoon
Contributor

It seems there is no upstream solution to this problem - it's been tracked in this issue for the last couple of years:
kubernetes/kubernetes#96650

It would be great if we could get a solution that also works in Controller Runtime, or in k/k itself, to solve this problem across the community, but the first step here might just be to solve it in CAPI, e.g. by upgrading references on every reconcile if required.

@sbueringer
Member Author

Yup - as discussed, I agree; I think we have to start with a solution for CAPI.

@srm09
Contributor

srm09 commented Nov 7, 2022

Replicating this in CAPV as an investigation issue. 💯

@sbueringer
Member Author

/milestone v1.4

Given we are planning to implement this for v1.4

@k8s-ci-robot k8s-ci-robot added this to the v1.4 milestone Jan 25, 2023
srm09 added a commit to srm09/cluster-api-provider-vsphere that referenced this issue Feb 1, 2023
During the VM reconciliation, the controllers track down the parent of
the Machine objects using the owner reference hierarchy. The owner
reference checks were comparing the Group as well as the API version to
determine and fetch the owner.

Currently, the API version used for comparison is selected from the
storageVersion of the CAPI objects on the API server. Due to this, any
objects created via an earlier CAPI version, say v1alpha3, which set
the owner reference API version to group name + v1alpha3, were not
being found, since no such object exists anymore; a similar object with
the updated API type is stored in the API server instead.

This drift happens since CAPI does not update the owner references of
the objects when moving to newer API versions. There is an issue
tracking this, kubernetes-sigs/cluster-api#7224,
which will solve the stale owner ref API version problem. Until then we
are dropping the version check and relying only on the group and kind
combination to fetch the parent object.

Signed-off-by: Sagar Muchhal <muchhals@vmware.com>
srm09 added three further commits to srm09/cluster-api-provider-vsphere that referenced this issue Feb 1, 2023 (same commit message as above)
k8s-infra-cherrypick-robot pushed a commit to k8s-infra-cherrypick-robot/cluster-api-provider-vsphere that referenced this issue Feb 2, 2023 (same commit message as above)
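
The workaround described in the commits above boils down to matching owners by Group and Kind only, ignoring the (possibly stale) version. A rough sketch of that idea, with an illustrative helper name rather than the actual CAPV code:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// findOwnerRefByGroupKind is an illustrative helper (not the actual CAPV
// code): it returns the first owner reference whose Group and Kind match,
// ignoring the version part of apiVersion, so a stale
// "cluster.x-k8s.io/v1alpha3" reference still resolves to the right owner.
func findOwnerRefByGroupKind(refs []metav1.OwnerReference, gk schema.GroupKind) *metav1.OwnerReference {
	for i := range refs {
		gv, err := schema.ParseGroupVersion(refs[i].APIVersion)
		if err != nil {
			continue // skip malformed references
		}
		if gv.Group == gk.Group && refs[i].Kind == gk.Kind {
			return &refs[i]
		}
	}
	return nil
}

func main() {
	refs := []metav1.OwnerReference{{
		APIVersion: "cluster.x-k8s.io/v1alpha3", // written by an old CAPI release
		Kind:       "Machine",
		Name:       "machine-example", // made-up name for the example
	}}
	owner := findOwnerRefByGroupKind(refs, schema.GroupKind{Group: "cluster.x-k8s.io", Kind: "Machine"})
	fmt.Println(owner != nil) // true: matched despite the stale version
}
```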
@sbueringer
Member Author

AFAIK this will be fixed by #8256
(@killianmuldoon please add a "Fixes" reference to the PR description if I'm right)

@killianmuldoon
Contributor

/reopen

This is no longer tested in CAPI e2e tests on main as we are no longer upgrading apiVersions in the clusterctl_upgrade tests.

We should consider whether to try to continuously test this - e.g. by creating a v1beta2 apiVersion for testing - or whether it's alright to leave this until there is a future apiVersion to upgrade to.

@k8s-ci-robot k8s-ci-robot reopened this Aug 9, 2023
@k8s-ci-robot
Contributor

@killianmuldoon: Reopened this issue.


@sbueringer
Member Author

sbueringer commented Aug 9, 2023

I think I wouldn't introduce an artificial apiVersion just to test this. Seems like a lot of effort.

Especially as we still test the re-reconcile case, and the only gap is that we don't verify that we would bump an apiVersion on an existing ref.

@sbueringer
Member Author

sbueringer commented Aug 9, 2023

Hm or wait, stupid idea. Could we just do the same as for the re-reconcile case, but instead of deleting the ownerRefs we just set them to some non-existent v1alpha1 version?

@killianmuldoon
Contributor

Hm or wait, stupid idea. Could we just do the same as for the re-reconcile case, but instead of deleting the ownerRefs we just set them to some non-existent v1alpha1 version?

Not stupid at all - I think it should work perfectly.
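
A rough sketch of what that test tweak could look like, assuming the e2e check can rewrite the owner references directly; the helper name and the stale version are illustrative, not the actual CAPI test code:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// setStaleOwnerRefVersions is an illustrative test helper (not the actual
// CAPI e2e code): instead of deleting the owner references, it rewrites
// their apiVersion to a non-existent old version. The test can then
// re-trigger reconciliation and assert that the controllers bump the
// references back to the current apiVersion.
func setStaleOwnerRefVersions(refs []metav1.OwnerReference, staleVersion string) []metav1.OwnerReference {
	out := make([]metav1.OwnerReference, len(refs))
	copy(out, refs)
	for i := range out {
		gv, err := schema.ParseGroupVersion(out[i].APIVersion)
		if err != nil {
			continue // skip malformed references
		}
		// Keep the group, downgrade only the version, e.g.
		// "cluster.x-k8s.io/v1beta1" -> "cluster.x-k8s.io/v1alpha1".
		out[i].APIVersion = schema.GroupVersion{Group: gv.Group, Version: staleVersion}.String()
	}
	return out
}

func main() {
	refs := []metav1.OwnerReference{{
		APIVersion: "cluster.x-k8s.io/v1beta1",
		Kind:       "MachineSet",
		Name:       "ms-example", // made-up name for the example
	}}
	fmt.Println(setStaleOwnerRefVersions(refs, "v1alpha1"))
}
```

After applying such a mutation and forcing a re-reconcile, the test would assert that every ownerRef is back on the current apiVersion.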
