🐛 ClusterResourceSet: continue applying when apply for a single cluster failed #8611
Conversation
/assign @fabriziopandini
/assign @killianmuldoon
/hold
Looks like we don't need this fix once this controller-runtime fix is merged: kubernetes-sigs/controller-runtime#2303. Basically, with the old controller-runtime version ConfigMap/Secret were not cached; with the new version they are cached. So we are now hitting stale cache issues with ConfigMap/Secret.
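With newer controller-runtime versions, the cached client can be told to read ConfigMaps and Secrets live from the API server instead of from the cache. A sketch of that option is below; this is not the code from this PR, just an illustration of the mechanism, and the exact field names changed across controller-runtime releases, so check the version you use:

```go
// Sketch only: in controller-runtime v0.15+ the manager's client can
// bypass the cache for specific types via client.CacheOptions.DisableFor.
mgr, err := ctrl.NewManager(cfg, ctrl.Options{
	Client: client.Options{
		Cache: &client.CacheOptions{
			// Read ConfigMaps and Secrets directly from the API server,
			// avoiding the stale-cache issue described above.
			DisableFor: []client.Object{&corev1.ConfigMap{}, &corev1.Secret{}},
		},
	},
})
```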
/close
@sbueringer: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Should we fix this anyway? I think this PR was all upside / improvements in CRS.
Do you think we should requeue if the object actually does not exist?
/reopen
@sbueringer: Reopened this PR. In response to this:
Trying all clusters before returning the errors is definitely an improvement we should merge.
I'll update the PR as discussed. Independent of that, I want to merge this 1-2 weeks after the controller-runtime bump to ensure the bump by itself does not introduce a regression.
/hold
@fabriziopandini Reframed the PR (explanation here: #8611 (comment)). I think now we can also merge this PR independent of the controller-runtime bump; I don't see a relevant overlap anymore.
/assign @fabriziopandini
…failed Signed-off-by: Stefan Büringer buringerst@vmware.com
/test pull-cluster-api-e2e-full-main
thanks for this @sbueringer
/lgtm
LGTM label has been added. Git tree hash: d2cb933b109bc76bbb0a8e7d10cdc9fa914dc4e4
/approve
/lgtm
Ok to hold as you proposed.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: fabriziopandini, vincepri. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@fabriziopandini I would merge now directly because of #8611 (comment). Basically, this PR doesn't change the requeue behavior for a CRS with missing resources anymore.
/hold cancel
/area clusterresourceset
Signed-off-by: Stefan Büringer buringerst@vmware.com
What this PR does / why we need it:
Previously, as soon as apply failed for a single cluster, the Reconcile func returned and the other clusters were not reconciled.
With this PR we try to apply for all clusters and then requeue if one of them returned an ErrClusterLocked error.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #