
CAPI controllers should add finalizers into templates if the templates are necessary for updates. #6588

Open
erkanerol opened this issue Jun 2, 2022 · 12 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

erkanerol commented Jun 2, 2022

What steps did you take and what happened:

Changes in KubeadmConfigTemplate don't trigger any rollout #4910.

To be able to update the config, you need to create a new KubeadmConfigTemplate and update the reference in the MachineDeployment. If you delete the existing KubeadmConfigTemplate, the CAPI controller cannot complete the update.
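The rotation described above can be sketched as follows; the names, namespace, and the changed kubelet setting are hypothetical:

```yaml
# Hypothetical example: rotate the bootstrap template by creating a copy
# under a new name and re-pointing the MachineDeployment at it.
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: my-workers-v2        # new name; my-workers-v1 must stay until the rollout finishes
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            eviction-hard: memory.available<256Mi   # the changed setting
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: my-workers
  namespace: default
spec:
  # ...clusterName/replicas/selector omitted...
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: my-workers-v2   # updated reference triggers the rollout
```

The old template (`my-workers-v1` here) has to remain in place until the rollout completes, which is exactly what makes a single-step declarative update difficult.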

What did you expect to happen:

The CAPI controllers should add finalizers to templates that are still needed to roll out changes.

Anything else you would like to add:

Clients are responsible for declaring the desired state, and I think storing old templates is not one of the clients' responsibilities. If a controller needs a CR for any functionality, it should add its finalizer to that resource to prevent problematic deletions.

For clients designed in a declarative way (e.g. GitOps with Flux, or Helm), it is hard to complete an update in one step. With the current design, we have to resort to workarounds such as adding our own finalizers to templates and cleaning them up with a custom operator, or performing upgrades in two steps. Both approaches are problematic.

Environment:

  • Cluster-api version: cluster-api-controller:v1.1.3
  • Kubernetes version: (use kubectl version): v1.22.9
  • OS (e.g. from /etc/os-release):

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 2, 2022
@erkanerol erkanerol changed the title CAPI controllers should add finalizers into templates if the templates are necessary for upgrades. CAPI controllers should add finalizers into templates if the templates are necessary for updates. Jun 2, 2022
@fabriziopandini
Member

This has been already addressed with ClusterClass and managed topologies

@erkanerol
Author

This has been already addressed with ClusterClass and managed topologies

Isn't it still a valid use case to not use ClusterClass?

@fabriziopandini fabriziopandini added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jul 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2022
@erkanerol
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 31, 2022
@fabriziopandini
Member

This has been already addressed with ClusterClass and managed topologies

Isn't it still a valid use case to not use ClusterClass?

What I was trying to explain (sorry if I did not provide context) is that ClusterClass and managed topologies are already an answer to this use case: when using that feature, the system takes care of template rotation on behalf of the user or the tool triggering the change.

Given that, I don't think we need to implement another solution to the same problem (and ClusterClass and managed topologies solve much more than this problem).
But happy to reconsider if a solution that works both with legacy and classy clusters comes up.

@erkanerol
Author

@fabriziopandini From my point of view, any controller should add a finalizer to a custom resource if the controller still requires that resource to make progress after it is deleted. This rule should be followed by every controller in the dependency hierarchy/tree, regardless of use case.
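The behavior being requested would amount to the controller placing a finalizer on any template that an active MachineSet still references. A minimal sketch of what such a template could look like (the finalizer name here is hypothetical, not an existing CAPI finalizer):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: my-workers-v1
  namespace: default
  finalizers:
    - cluster.x-k8s.io/template-in-use   # hypothetical finalizer the controller would manage
spec:
  template:
    spec: {}
```

With the finalizer present, deleting the object only sets its `deletionTimestamp`; the template remains readable until the controller removes the finalizer once no rollout depends on it.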

@fabriziopandini
Member

/triage accepted
I will start by adding a warning about not deleting old templates in https://cluster-api.sigs.k8s.io/tasks/updating-machine-templates.html

I'm not 100% sold on the idea of forcing finalizers onto objects managed by the users (although we already do something similar when using managed topologies).
Let's wait for some additional opinions - or bring it up during office hours - before acting on this part.

/help

@k8s-ci-robot
Contributor

@fabriziopandini:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/triage accepted
I will start by adding a warning about not deleting old templates in https://cluster-api.sigs.k8s.io/tasks/updating-machine-templates.html

I'm not 100% sold on the idea of forcing finalizers on objects managed by the users (while instead we are already doing something similar when using managed topologies)
Let's wait for some additional opinion - or bring it up during office hours - before acting on this part.

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Nov 30, 2022
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jan 19, 2024
name212 commented Feb 3, 2024

We have a similar problem with the infrastructure MachineTemplate. We use Helm to deploy the machine template. To update the MachineDeployment according to the instructions, we add a checksum to the MachineTemplate name; but when the name changes, Helm removes the previous MachineTemplate, the capi-controller skips processing the MachineSet with a "Not found" error, and the old machines remain in the cluster. The only option for us is to use the `helm.sh/resource-policy: keep` annotation and write a third-party controller to delete orphaned MachineTemplates, which is not very convenient. We think it should be possible to tolerate a removed MachineTemplate and not block the MachineDeployment and MachineSets from rolling over.
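The Helm workaround mentioned above would look roughly like this (the template kind, name, and checksum value are illustrative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate   # any infrastructure template kind; Docker used for illustration
metadata:
  name: my-workers-3fa85f64   # checksum-suffixed name rendered by the Helm chart
  namespace: default
  annotations:
    helm.sh/resource-policy: keep   # Helm will not delete this object on upgrade
spec:
  template:
    spec: {}
```

The annotation stops Helm from deleting the object when the release no longer renders it, at the cost of orphaned templates that something else must garbage-collect.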

Another option: we are currently migrating from Machine Controller Manager to Cluster API.
In the MachineDeployment we add an annotation indicating that the MachineClass has changed and the MachineDeployment needs to be updated. It might be worth following the same approach here.

@fabriziopandini
Member

/priority backlog

@fabriziopandini
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 31, 2024