⚠️ Implement Patch method #235
Conversation
/assign @DirectXMan12
@adracus: GitHub didn't allow me to assign the following users: grantr. Note that only kubernetes-sigs members and repo collaborators can be assigned.
/hold Unfortunately, server-side apply hasn't landed yet, but I still feel like this is a really awkward structure to expose to users, especially as one that we have to support in perpetuity.
/priority important-soon
Had a few more people ask about this. I think we may need to go forward with some patch-like thing.
@DirectXMan12 do you have more input on this? I'd be up for changing this PR to suit these needs.
The node draining code itself is imported from github.com/openshift/kubernetes-drain. At the same time, it's currently impossible to use the controller-runtime client for node draining due to the missing Patch operation (kubernetes-sigs/controller-runtime#235). Thus, the machine controller needs to initialize a kubeclient as well in order to implement the node draining logic. Once the Patch operation is implemented, the draining logic can be updated to replace the kubeclient with the controller-runtime client. Also, initialize an event recorder to generate node draining events.
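To make the dependency concrete, here is a minimal sketch of how cordoning a node (the first step of a drain) could look once the controller-runtime client has a Patch operation. This is hypothetical: it assumes the `Patch(ctx, pt, data, obj, subresources...)` signature as originally proposed in this PR, which changes later in the thread.

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// cordon marks a node unschedulable via a JSON merge patch, using the
// Patch signature as originally proposed in this PR (hypothetical sketch).
func cordon(ctx context.Context, c client.Client, node *corev1.Node) error {
	patch := []byte(`{"spec":{"unschedulable":true}}`)
	return c.Patch(ctx, types.MergePatchType, patch, node)
}
```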
pkg/client/client.go
```diff
@@ -121,6 +123,15 @@ func (c *client) Delete(ctx context.Context, obj runtime.Object, opts ...DeleteO
 	return c.typedClient.Delete(ctx, obj, opts...)
 }
 
+// Patch implements client.Client
+func (c *client) Patch(ctx context.Context, pt types.PatchType, data []byte, obj runtime.Object, subresources ...string) error {
```
I think it would be better to make this a splat of PatchOptions, like we have for ListOptions and DeleteOptions, to allow for future expansion. We can then create an option method for the subresource, as in the sketch below.
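For illustration, a minimal sketch of the shape being suggested here, by analogy with the existing DeleteOptions/DeleteOptionFunc pattern; every name below is an assumption for illustration, not the API that was merged:

```go
// Hypothetical functional-options pattern for Patch, mirroring
// DeleteOptionFunc. Names are illustrative assumptions.
type PatchOptions struct {
	// Subresources the patch should target, e.g. "status".
	Subresources []string
}

// PatchOptionFunc mutates PatchOptions, leaving room for future options
// without changing the Patch method signature.
type PatchOptionFunc func(*PatchOptions)

// PatchSubresource returns an option that targets a subresource.
func PatchSubresource(names ...string) PatchOptionFunc {
	return func(o *PatchOptions) {
		o.Subresources = append(o.Subresources, names...)
	}
}
```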
Our previously discussed subresource plan has been a `.Subresource` method, but I'm open to debate on that. Otherwise, I agree that using a splat for subresources isn't the right way to go.
I removed the `subresources` splat and exchanged it for `PatchOptionFunc`. However, since there is no final clarity yet on how subresources should be handled, the `PatchOptions` are empty for now. Once we agree on how to do subresource handling, we can incorporate it without changing the API.
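The interim shape would then look roughly like this (an illustrative sketch under the naming above, not necessarily the exact code in this PR):

```go
import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
)

// Interim sketch: the options splat is present, but PatchOptions is an
// empty struct until subresource handling is settled.
type PatchOptions struct{}

// PatchOptionFunc mutates PatchOptions.
type PatchOptionFunc func(*PatchOptions)

type Client interface {
	Patch(ctx context.Context, pt types.PatchType, data []byte,
		obj runtime.Object, opts ...PatchOptionFunc) error
	// other methods (Get, List, Create, Update, Delete) elided
}
```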
Ok, so, to move forward:

```go
type Patch interface {
	Type() types.PatchType
	Data(obj runtime.Object) []byte
}
```

This way, we can provide nicer abstractions on top of the patch method (e.g. passing `client.Apply`). Also (this is a bit more just preference), I feel it looks more natural to have the object, then the patch data, as the argument order.

/hold cancel

Sorry for all the foot-dragging on this :-(
I implemented the spec almost as you described it. To also show that this approach is viable, I went ahead and implemented a …
This is a breaking change since it modifies an interface, so I've edited the title.
I put the `runtime.Object` part in there so that we could cleanly have server-side apply work nicely:

```go
cl.Patch(ctx, &desiredObject, client.Apply)
```

It also means that you can do stuff like:

```go
cl.Patch(ctx, &desiredObject, client.MergedFromOld(&oldObject))
```

Then:

```go
type applyPatcher struct{}

func (p applyPatcher) Type() types.PatchType {
	return types.ApplyPatchType // or whatever it actually is
}

func (p applyPatcher) Data(obj runtime.Object) ([]byte, error) {
	// maybe this should actually be an encoder and not the unstructured
	// scheme, but it doesn't really matter too much
	return runtime.Encode(unstructured.UnstructuredJSONScheme, obj)
}
```
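For comparison, a `MergedFromOld`-style patcher could be built on the same interface. This is a sketch assuming github.com/evanphx/json-patch for computing the merge patch; it is not code from this PR:

```go
import (
	"encoding/json"

	jsonpatch "github.com/evanphx/json-patch"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
)

// mergeFromPatcher computes a JSON merge patch from an old object to the
// object passed to Data. Hypothetical sketch, not the merged implementation.
type mergeFromPatcher struct {
	old runtime.Object
}

func (p mergeFromPatcher) Type() types.PatchType { return types.MergePatchType }

func (p mergeFromPatcher) Data(obj runtime.Object) ([]byte, error) {
	oldJSON, err := json.Marshal(p.old)
	if err != nil {
		return nil, err
	}
	newJSON, err := json.Marshal(obj)
	if err != nil {
		return nil, err
	}
	return jsonpatch.CreateMergePatch(oldJSON, newJSON)
}

// MergedFromOld returns a Patch that applies the diff from old to the
// patched object.
func MergedFromOld(old runtime.Object) mergeFromPatcher {
	return mergeFromPatcher{old: old}
}
```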
Alright, I tried to reflect these changes, PTAL.
Looks good! I'll follow up with some more patch types (JSON, server-side apply). /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: adracus, DirectXMan12.
Pushed some fixes + options support, so we can get this merged.
I'll do the following in a follow-up PR:
This supports passing update options to patch options (e.g. dry-run options).
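A sketch of what that forwarding could look like, assuming the patch options wrap metav1.UpdateOptions under the hood; the names here are illustrative assumptions, not necessarily the merged API:

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Hypothetical: PatchOptions carries the underlying UpdateOptions so
// update-level options such as dry-run flow through to the patch call.
type PatchOptions struct {
	UpdateOptions metav1.UpdateOptions
}

// DryRunAll asks the API server to process the patch without persisting it.
func DryRunAll(o *PatchOptions) {
	o.UpdateOptions.DryRun = []string{metav1.DryRunAll}
}
```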
Force-pushed from d17c483 to cd1f7a6.
oof, that was a weird flake
/lgtm
This adds the `Patch` method to the `Client` interface, as well as to the two implementations (`typedClient`, `unstructuredClient`).
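Putting it together, end-to-end usage of the new method could look like this. It is a sketch based on the Patch interface discussed above, with `rawPatch` as a hypothetical helper; the real library may expose different constructors:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

// rawPatch is a hypothetical helper implementing the Patch interface from
// the discussion above with fixed patch bytes.
type rawPatch struct {
	pt   types.PatchType
	data []byte
}

func (p rawPatch) Type() types.PatchType                   { return p.pt }
func (p rawPatch) Data(obj runtime.Object) ([]byte, error) { return p.data, nil }

func main() {
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}
	cl, err := client.New(cfg, client.Options{})
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	var node corev1.Node
	if err := cl.Get(ctx, client.ObjectKey{Name: "node-1"}, &node); err != nil {
		panic(err)
	}

	// Label the node via a JSON merge patch.
	patch := []byte(`{"metadata":{"labels":{"example":"patched"}}}`)
	if err := cl.Patch(ctx, &node, rawPatch{types.MergePatchType, patch}); err != nil {
		panic(err)
	}
}
```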