FakeClient update on status subresource not returning error despite conflicting resource version since v0.15.0 #2362
Comments
Agree, sounds like a bug
/assign
conflicts: The fake client for subresources does not correctly handle resource version conflicts on update; it does not return a 409 Conflict error. Closes kubernetes-sigs#2362. Signed-off-by: iiiceoo <iiiceoo@foxmail.com>
Hello, I encountered the exact same issue. While I'm waiting for a fix to be released, I rolled back my controller-runtime to 0.14.6 (with all k8s Go libs at 0.26.1) and it works for now.
@sbueringer this did not work entirely for us. With the new changes we still had to explicitly register our CRDs as status subresources (via `WithStatusSubresource`) to be able to use status updates. Generating the client and adding a resource with client.Create() does not work as before.
Same issue here; using #2365 didn't fix it.
/kind bug

With the release of v0.16.0, did this not work? I see above that #2365 wasn't successful, but I wanted to know whether anything else in that release fixed this issue.
Re-doing this test like this:

```go
func Test_FakeStatusUpdate(t *testing.T) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Namespace:       "test",
			Name:            "test",
			ResourceVersion: "1",
		},
	}
	c := fake.NewClientBuilder().
		WithRuntimeObjects(pod).
		WithStatusSubresource(pod).
		Build()
	pod.Status.Phase = "testphase"
	pod.ResourceVersion = "2"
	err := c.Status().Update(context.Background(), pod)
	if err == nil {
		t.Fatal("Expected conflict error, but got nil")
	}
}
```

now passes with the intended error, as before (the snippet was updated here). A subsequent Get of the pod shows that the pod was not updated. This test runs against controller-runtime v0.16.0 and go1.20.

Does this help @nbam-e?
@troy0820 I think some of the other commenters here are referring to a different behavior change when using the fake client with CRDs. Since controller-runtime 0.15.0 you have to manually register the status subresource for them via `WithStatusSubresource`.
There is another thread around here describing why this is now necessary: whether a non-core resource has a status subresource is considered too difficult to infer. Supplementing the builder with `WithStatusSubresource` at instantiation works, however. The issue I have been observing is that if you create a resource (you typically don't put a status on it, just a spec) and then try to update the status of the object you just created (without registering the subresource when building the client), you'll get an error. I'm looking to see how to resolve that, but I fear it can't be done, because the client can't tell whether the object is supposed to have a subresource before you try to update the status.
* Upgrade operator utils
* regen manifests
* fix test, see: kubernetes-sigs/controller-runtime#2362 (comment)

Signed-off-by: David J. M. Karlsen <david.johan.macdonald.karlsen@dnb.no>
Co-authored-by: David J. M. Karlsen <david.johan.macdonald.karlsen@dnb.no>
With the upgrade to controller-runtime v0.15 we need to supplement the fake client builder with `WithStatusSubresource` at instantiation in order to be able to update the status of objects we have created. In previous versions of controller-runtime this was not necessary because the subresource semantics were ignored, but in v0.15 the fake client cannot infer the status subresource without its explicit registration. Ref.: kubernetes-sigs/controller-runtime#2362 Signed-off-by: Mat Kowalski <mko@redhat.com>
I've come up with a hacky workaround that closed the gap for me.
Is there another workaround for this? My test is trying to check a status change during a reconcile, meaning I cannot provide it. Is there a workaround, or another client that I can use? In older versions (0.13.x) it worked as expected; why did that flow change?
The original issue was fixed, and the latter discussion seems to revolve around the question of how to configure a CRD to have a status subresource in the fake client. The tl;dr on how to configure the fake client to provide a status subresource for a given resource is the following:
Whether this is needed cannot be automatically inferred; refer to #2386 (comment) for why. As the original issue was fixed, I am going to close this. Please limit discussion on this issue to the bug it was created for, which has since been fixed.
After upgrading to controller-runtime 0.15.0 one of our unit tests started failing.
The tested controller performs a `client.Status().Update(...)` on Pod resources. The now-failing unit test aims to test the controller behavior in case of a resource conflict (i.e. the fake client has a pod object, and the controller performs a status update on the same pod object but with a different resource version; expected behavior: Conflict error, actual behavior since upgrading to 0.15.0: no error). This can be reproduced with the following testcase (controller-runtime 0.15.0, k8s api 0.27.2):
I think it happens because the result of `SetResourceVersion` is immediately overwritten again by `fromMapStringAny`.
There appears to be a unit test for this case, but it doesn't fail because it uses `client.Update(...)` instead of `client.Status().Update(...)`: https://github.com/kubernetes-sigs/controller-runtime/blob/30eae58f1b984c1b8139dd9b9f68dd2d530ed429/pkg/client/fake/client_test.go#LL1441C2-L1441C2

Related PR: #2259