Support server-side apply (client.Apply) in fake Client #2341

Open
nathanperkins opened this issue May 23, 2023 · 23 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.
kind/support Categorizes issue or PR as a support question.
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@nathanperkins

nathanperkins commented May 23, 2023

Recently, we've been writing extensive unit test coverage of our controllers using fake.Client, but a few of our controllers use client.Apply, which is not supported by fake.Client.

We can't migrate these controllers from client.Apply to client.Merge because, after an upgrade, any CRs managed by the previous version will have owned fields and the API server will reject the request (is my understanding here correct?). Based on the SSA docs, it seems the server only rejects requests based on ownership when using SSA, so using update would work.
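
For context, a minimal sketch of the client.Apply call pattern in question (the Deployment type and the "my-controller" field owner are illustrative, not taken from our code):

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyExample issues a server-side apply patch. The API server records
// field ownership per FieldOwner and rejects conflicting writers unless
// ForceOwnership is set.
func applyExample(ctx context.Context, c client.Client) error {
	obj := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	}
	return c.Patch(ctx, obj, client.Apply,
		client.FieldOwner("my-controller"), client.ForceOwnership)
}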

Envtest takes ~5-15s to set up for each test case, meaning a test suite that would take 0.05s with fake.Client takes 5 minutes with envtest. In many cases, we can isolate tests to namespaces and reuse the same client, but not always. We prefer our unit tests to be as isolated as possible.

Would it be possible to support server-side apply / client.Apply / SSA in the fake client?

@alvaroaleman
Member

The reason this isn't currently supported is that upstream client-go doesn't support it: kubernetes/kubernetes#115598

We could build something downstream, I suppose, and the create case is going to be simple. I think the update case is going to be pretty complicated, though (whether to keep a field that is currently present but not in the submitted apply configuration is a non-trivial question to answer), so if possible I'd prefer to wait for upstream.

In the meantime, when using today's 0.15 release of controller-runtime, you can use the interceptor to set up createOrUpdate logic in Patch when an apply patch is submitted, which works for your case.

@vincepri
Member

Envtest takes ~5-15s to set up for each test case

Hm, envtest is usually set up once per test package; is there a reason why envtest is set up once for every test case here?
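
For readers finding this later, a minimal sketch of that once-per-package pattern (package and variable names are illustrative):

package controllers_test

import (
	"os"
	"testing"

	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

var cfg *rest.Config

// TestMain starts a single envtest API server shared by every test in the
// package, so the ~5-15s startup cost is paid once rather than per test case.
func TestMain(m *testing.M) {
	testEnv := &envtest.Environment{}
	var err error
	if cfg, err = testEnv.Start(); err != nil {
		panic(err)
	}
	code := m.Run()
	_ = testEnv.Stop()
	os.Exit(code)
}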

@nathanperkins
Author

Hm, envtest is usually set up once per test package; is there a reason why envtest is set up once for every test case here?

I need to review our cases. We want to ensure that our test cases are isolated. Most of the time you can isolate them into unique namespaces, but some of the controllers have specific requirements around that. I think we may be able to get around it by specifying which namespace to use as a field on the reconciler struct.

In the meantime, when using today's 0.15 release of controller-runtime, you can use the interceptor to set up createOrUpdate logic in Patch when an apply patch is submitted, which works for your case.

Thank you for the response! I will look into this :)

The reason this isn't currently supported is that upstream client-go doesn't support it: kubernetes/kubernetes#115598

Sounds reasonable to me. I mostly wanted to create an issue so that we have something to represent that it would make our lives a bit easier. If the work to benefit ratio doesn't make sense, that's fair.

Given that we can run envtest and isolate cases in namespaces, that is probably the way to go. It's a small bummer that we have to run our unit tests alongside an external dependency which takes some time to start up. We're going to reorganize things a bit and improve our scripts to make this easier for our developers.

@nathanperkins
Author

nathanperkins commented May 24, 2023

Hm, envtest is usually set up once per test package; is there a reason why envtest is set up once for every test case here?

Found a case where it is a bit of a drag to use envtest. Any test which involves cluster-scoped objects cannot be fully isolated in namespaces, leading to some issues:

  • Requires the test case to clean up after itself, resulting in more complex code and longer test execution times.
  • Opportunity for test pollution to cause false positives and/or false negatives if clean-up isn't fully accounted for.
  • Cannot run concurrently.

@alvaroaleman
Member

@nathanperkins could we keep the discussion around if and how and when to use envtest separate? We are aware that this is lacking right now, but it is non-trivial, which is why upstream hasn't done it. This essentially requires having the server-side SSA logic in the fake client.

@nathanperkins
Author

@nathanperkins could we keep the discussion around if and how and when to use envtest separate?

Sure, it's totally understood this is not going to be resolved anytime soon.

I think that people searching for this issue will find it useful if there is clarity on what they can do in the meantime. The discussion on when / how to use envtest effectively instead of the fake client seems useful to that end. Maybe there is a doc or blog post that could be linked?

@vincepri
Member

Found a case where it is a bit of a drag to use envtest. Any test which involves cluster-scoped objects cannot be fully isolated in namespaces

Do you have an example handy to show? Just to wrap my head around a bit more 😄

A few thoughts:

  • We probably need better documentation / examples on how to use and re-use envtest appropriately.
  • Ideally envtest is so easy to integrate and use that very few folks rely on fake client (which imo is generally lacking in terms of full feature support, and gives a false sense of safety in some cases).
  • Depending on how the reconcilers are structured, maybe there is a way that a filtered cache/client is passed into each reconciler's test case so it only has a partial view of the objects? (See the sketch after this list.)
    // DefaultLabelSelector will be used as a label selectors for all object types
    // unless they have a more specific selector set in ByObject.
    DefaultLabelSelector labels.Selector
    // DefaultFieldSelector will be used as a field selectors for all object types
    // unless they have a more specific selector set in ByObject.
    DefaultFieldSelector fields.Selector
    • Requires some sort of predetermined convention, in either labels, or annotations, or naming convention.
    • Cache filters would only apply to list/get calls.
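
A minimal sketch of that filtered-cache idea, assuming controller-runtime v0.15's cache options (the test-id label convention is illustrative):

import (
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

// newTestManager builds a manager whose cache only sees objects labeled with
// this test case's ID, so list/get calls served from the cache cannot observe
// cluster-scoped objects created by other test cases.
func newTestManager(testID string) (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Cache: cache.Options{
			DefaultLabelSelector: labels.SelectorFromSet(labels.Set{"test-id": testID}),
		},
	})
}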

@nathanperkins
Author

nathanperkins commented May 26, 2023

Do you have an example handy to show? Just to wrap my head around a bit more 😄

I couldn't show the code without going through a bunch of approvals. I can tell you it's a controller which reconciles on corev1.Service and looks at corev1.Node status to create some internal CRs for network configuration. Our test is using envtest and we can't fully isolate the nodes between test cases without cleaning up the nodes.

Depending on how the reconcilers are structured, maybe there is a way that a filtered cache/client is passed into each reconciler's test case so it only has a partial view of the objects?

Great idea! I'll share this with my teammate and see if it works for our case.

We probably need better documentation / examples on how to use and re-use envtest appropriately.

I agree, sharing some of these patterns would really help. Last I looked at the kubebuilder docs, they have an example which uses ginkgo and gomega and relies on the manager to run reconciliation. I've been finding that it's easier to write exhaustive and accurate test coverage using more traditional table-driven tests which call Reconcile directly. We still write integration tests with ginkgo and gomega which use the manager, but those are less exhaustive, focusing more on ensuring the event handlers work correctly.
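
As an illustration, a minimal sketch of that table-driven style (the Reconciler type here is a hypothetical stand-in for the controller under test):

import (
	"context"
	"testing"

	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestReconcile(t *testing.T) {
	tests := []struct {
		name    string
		objs    []client.Object // initial cluster state
		wantErr bool
	}{
		{name: "object missing", objs: nil, wantErr: false},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			c := fake.NewClientBuilder().WithObjects(tc.objs...).Build()
			r := &Reconciler{Client: c} // hypothetical reconciler under test
			req := ctrl.Request{NamespacedName: types.NamespacedName{Name: "example", Namespace: "default"}}
			if _, err := r.Reconcile(context.Background(), req); (err != nil) != tc.wantErr {
				t.Fatalf("Reconcile() error = %v, wantErr %v", err, tc.wantErr)
			}
		})
	}
}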

I'd love to see more discussion in the community about this, whether in docs or blog posts :)

@nathanperkins
Author

@vincepri, I'm moving discussion of using envtest with isolated unittest cases to #2358

@sbueringer
Member

sbueringer commented May 31, 2023

@nathanperkins we have some general guidance here: https://cluster-api.sigs.k8s.io/developer/testing.html (not sure in which issue we want to dig deeper into pro/con of fake client vs envtest)

@jakobmoellerdev

The reason this isn't currently supported is that upstream client-go doesn't support it: kubernetes/kubernetes#115598

We could build something downstream, I suppose, and the create case is going to be simple. I think the update case is going to be pretty complicated, though (whether to keep a field that is currently present but not in the submitted apply configuration is a non-trivial question to answer), so if possible I'd prefer to wait for upstream.

In the meantime, when using today's 0.15 release of controller-runtime, you can use the interceptor to set up createOrUpdate logic in Patch when an apply patch is submitted, which works for your case.

For anyone looking for a workaround until the fake Client supports client.Apply:

import (
	"context"
	"errors"
	"fmt"

	k8serror "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
	"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
)

fake.NewClientBuilder().WithScheme(scheme).WithObjects(objs...).WithInterceptorFuncs(interceptor.Funcs{
	Patch: func(ctx context.Context, clnt client.WithWatch, obj client.Object, patch client.Patch, opts ...client.PatchOption) error {
		// Apply patches are supposed to upsert, but the fake client fails if the
		// object doesn't exist. If an apply patch occurs for an object that
		// doesn't yet exist, create it first.
		if patch.Type() != types.ApplyPatchType {
			return clnt.Patch(ctx, obj, patch, opts...)
		}
		check, ok := obj.DeepCopyObject().(client.Object)
		if !ok {
			return errors.New("could not check for object in fake client")
		}
		if err := clnt.Get(ctx, client.ObjectKeyFromObject(obj), check); k8serror.IsNotFound(err) {
			if err := clnt.Create(ctx, check); err != nil {
				return fmt.Errorf("could not inject object creation for fake: %w", err)
			}
		}
		return clnt.Patch(ctx, obj, patch, opts...)
	},
}).Build()

@troy0820
Member

/kind support feature
@nathanperkins can we close this issue? I see you moved it to a different issue

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. kind/feature Categorizes issue or PR as related to a new feature. labels Aug 28, 2023
@alvaroaleman
Member

alvaroaleman commented Aug 28, 2023

I think this issue is valid and we should keep it open. Effectively it is tracking the upstream issue kubernetes/kubernetes#115598; once that gets resolved, we will get this as well.

Something like what @jakobmoellerdev suggested is only an approximation, in that it makes SSA effectively a CreateOrPatch in the fake client, which is not the same as the field ownership tracking done in actual SSA. So what happens in such a test might not be representative of what happens in reality.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Mar 27, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pmalek

pmalek commented Mar 27, 2024

/remove-lifecycle rotten
/reopen

@k8s-ci-robot
Contributor

@pmalek: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/remove-lifecycle rotten
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 27, 2024
@sbueringer
Member

/reopen

@k8s-ci-robot
Contributor

@sbueringer: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Mar 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 26, 2024
@alvaroaleman
Member

Upstream fake clients started to support SSA in kubernetes/kubernetes#125560; we should follow suit.
