
🌱 Set the FieldOwner when updating objects #4428

Closed
wants to merge 2 commits

Conversation

@bboreham (Contributor) commented Apr 4, 2021

What this PR does / why we need it:

This means the controller name will show up as the manager in the managedFields of each object.

This is what I was trying to achieve in #4257, but better since we get separate identities for all the different controllers that live in one binary.

In a couple of places I extracted an existing string out to a constant so I could re-use it.

I have not done Patch() yet as it will involve changing PatchHelper, so I thought I would get some reaction to this before continuing.

I didn't change clusterctl, as the default of using the binary name works well in that case.
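
Roughly, the change amounts to passing a client.FieldOwner option on writes. A minimal sketch (illustrative only; the constant and helper names here are not the actual diff):

```go
// Illustrative sketch only, not the PR's actual diff: pass a
// client.FieldOwner option on writes so the API server records the
// controller's name (rather than the shared binary name) in managedFields.
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// kubeadmConfigControllerName is a made-up example of a string extracted
// to a constant so it can be reused as the field owner.
const kubeadmConfigControllerName = "kubeadmconfig-controller"

func updateWithOwner(ctx context.Context, c client.Client, obj client.Object) error {
	return c.Update(ctx, obj, client.FieldOwner(kubeadmConfigControllerName))
}
```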

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 4, 2021
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign neolit123 after the PR has been reviewed.
You can assign the PR to them by writing /assign @neolit123 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot (Contributor)

Hi @bboreham. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 4, 2021
@vincepri (Member) commented Apr 5, 2021

Are there any side effects to setting the field owners? If we go down this path, and also update the patch helper code, we probably want to make sure we don't break any behaviors.

@vincepri (Member) commented Apr 5, 2021

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Apr 5, 2021
@bboreham (Contributor, Author) commented Apr 5, 2021

Good point: for an existing object, changing the manager from "manager" to, say, "kubeadmconfig-controller" at the same time as changing the value should cause a conflict.

I haven't actually seen this happen in practice, but then most of my testing consists of deleting everything before a run.
I'm not sure whether we are actually invoking "server-side apply" behaviour when we call Update(). (But presumably we are when we call Patch().)

Oh dear.

@sbueringer (Member)

@bboreham https://kubernetes.io/docs/reference/using-api/server-side-apply/#using-server-side-apply-in-a-controller

It is strongly recommended for controllers to always "force" conflicts, since they might not be able to resolve or act on these conflicts.

Maybe that helps, although I don't recall how to force an overwrite in this case.
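
For reference, controller-runtime does expose a client.ForceOwnership option for apply-type patches, which matches the docs' recommendation; note it only applies to server-side apply, not to plain Update(). A sketch, with a made-up owner name:

```go
// Sketch of "always force conflicts" for a controller doing server-side
// apply; the owner name "my-controller" is made up.
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

func serverSideApply(ctx context.Context, c client.Client, obj client.Object) error {
	// client.Apply sends an apply-type (server-side apply) patch;
	// client.ForceOwnership makes the API server reassign conflicting
	// fields to this owner instead of returning a conflict error.
	return c.Patch(ctx, obj, client.Apply, client.ForceOwnership, client.FieldOwner("my-controller"))
}
```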

@JoelSpeed (Contributor)

But presumably we are when we call Patch()

You have to explicitly tell the patch call which type of patch it is, so if we were using server-side apply you would see the patch call specify client.Apply as the patch type; otherwise we aren't using it. I haven't checked, but given the status of server-side apply support right now, I suspect we haven't been using it in CAPI yet.
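
To make the distinction concrete, a sketch of the two patch flavours (function and owner names are illustrative):

```go
// Sketch contrasting the two patch types in controller-runtime;
// function and owner names are illustrative.
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

func patchBothWays(ctx context.Context, c client.Client, before, after client.Object) error {
	// A merge patch computed client-side from a snapshot: not server-side
	// apply, so no per-manager field ownership semantics apply.
	if err := c.Patch(ctx, after, client.MergeFrom(before)); err != nil {
		return err
	}
	// An apply-type patch: this is the server-side apply path, where the
	// API server tracks field ownership per manager.
	return c.Patch(ctx, after, client.Apply, client.FieldOwner("example-owner"))
}
```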

@vincepri (Member) commented Apr 6, 2021

We're definitely not using server-side apply. Controller Runtime has a tiny bit of support for it, although it's not super straightforward, so we opted to keep doing what we've been doing so far.

Commit: This lets us put a meaningful name for 'manager' in the 'managedFields' of objects
@bboreham (Contributor, Author) commented Apr 8, 2021

I have redone this using a client.Client wrapper, this time covering Patch() as well.
I will try running it on a pre-existing cluster soon.

(If this technique gains acceptance, the wrapper should go upstream in controller-runtime)
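
The shape of the wrapper is roughly this (a sketch, not the exact code in the PR):

```go
// Sketch of the wrapper idea: delegate to an inner client.Client and
// append a FieldOwner option to every Update and Patch call.
package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

type fieldOwnerClient struct {
	client.Client        // embedded: all other methods pass through unchanged
	owner         string // e.g. the controller name
}

func (c fieldOwnerClient) Update(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error {
	return c.Client.Update(ctx, obj, append(opts, client.FieldOwner(c.owner))...)
}

func (c fieldOwnerClient) Patch(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
	return c.Client.Patch(ctx, obj, patch, append(opts, client.FieldOwner(c.owner))...)
}
```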

Commit: So we get a meaningful name for 'manager' in the 'managedFields' of objects
@vincepri (Member) commented Apr 9, 2021

/hold

I'm not sure this is something we want to do for all calls. For example, this would set the field owner even for non-owned objects: MachineHealthCheck, for instance, patches Machines when they need remediation, which would set the owner of the conditions field to the MHC controller. The other side effect is that it seems we're forcing ownership regardless of whether the objects are CAPI or not.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 9, 2021
@bboreham (Contributor, Author) commented Apr 9, 2021

Each write gets an entry in managedFields regardless of whether this code is added.
Currently it just says "manager", so I think putting a more accurate name is an improvement.

The "force" semantics to change ownership seem to cover the case where the same field is updated by one controller then another.

And I don't get your point about "forcing ownership regardless of whether the objects are CAPI or not". What's an example of a non-CAPI object where you would want a different owner?
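
For anyone who wants to check what their cluster has recorded, the managers can be read straight off an object's metadata (sketch; the helper name is made up):

```go
// Sketch: list which managers the API server has recorded for an object.
// The helper name is made up.
package example

import "sigs.k8s.io/controller-runtime/pkg/client"

func managersOf(obj client.Object) []string {
	var names []string
	for _, entry := range obj.GetManagedFields() {
		// Today every CAPI controller shows up as "manager"; with this PR
		// each would appear under its own name.
		names = append(names, entry.Manager)
	}
	return names
}
```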

@alvaroaleman (Member)

So, without knowing much about the APIs here, I would argue this is at least not worse than always putting the binary name in there :)

And there is no upstream concept for ownership of "shared fields" like a conditions slice, i.e. one has to own all of it or none of it?

@vincepri (Member) commented Apr 9, 2021

And there is no upstream concept for ownership of "shared fields" like a conditions slice, i.e. one has to own all of it or none of it?

IIRC, each field owner is at the field level, so yes, it's going to own all of it.

Currently it just says "manager", so I think putting a more accurate name is an improvement.

Totally agree. @alvaroaleman I wonder: instead of having a shim in Cluster API, should we have something in Controller Runtime instead?

@alvaroaleman (Member)

Totally agree. @alvaroaleman I wonder: instead of having a shim in Cluster API, should we have something in Controller Runtime instead?

Yeah, I think that would make a lot of sense

@vincepri (Member)

@bboreham Would you be open to submitting the change to Controller Runtime?

@bboreham (Contributor, Author)

Sure; any thoughts on where it could go? In pkg/client, or in a new package?

@bboreham (Contributor, Author)

kubernetes-sigs/controller-runtime#1474

(I haven't updated this PR to use it yet.)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 12, 2021
@k8s-ci-robot (Contributor)

@bboreham: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 19, 2021
@vincepri (Member)

Closing this, given that there have been no updates w.r.t. setting field owners in our calls and its possible implications.

/close

@k8s-ci-robot (Contributor)

@vincepri: Closed this PR.

In response to this:

Closing this, given that there have been no updates w.r.t. setting field owners in our calls and its possible implications.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
