CacheReader: DeepCopyObject on every Get/List method calls #1235

Closed
lobkovilya opened this issue Nov 2, 2020 · 7 comments · Fixed by #1274
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@lobkovilya

We are actively using CacheReader in our project and we ran into a memory issue caused by DeepCopyObject inside the Get and List methods.

CacheReader's List calls Indexer's List and then calls DeepCopyObject for every object:

func (c *CacheReader) List(_ context.Context, out runtime.Object, opts ...client.ListOption) error {
	...
	objs = c.indexer.List()
	...
	runtimeObjs := make([]runtime.Object, 0, len(objs))
	for _, item := range objs {
		...
		outObj := obj.DeepCopyObject()
		outObj.GetObjectKind().SetGroupVersionKind(c.groupVersionKind)
		runtimeObjs = append(runtimeObjs, outObj)
	}
	...
}

But the Indexer implementation (the cache type in client-go) has the following doc-comment on its List method:

// List returns a list of all the items.
// List is completely threadsafe as long as you treat all items as immutable.
func (c *cache) List() []interface{} {
	return c.cacheStorage.List()
}

And that's exactly our scenario: we call CacheReader.List from different goroutines and treat the returned items as immutable everywhere.

Is there any way to avoid DeepCopyObject, or to make this behavior configurable?
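
For reference, a minimal sketch of the access pattern described above (the function and the Pod type here are just illustrative, not our actual code): many goroutines list from the same cache-backed Reader and only read the results, so the per-call deep copy buys us nothing.

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// countRunningPods is called concurrently from many goroutines; the listed
// items are only read, never mutated.
func countRunningPods(ctx context.Context, r client.Reader) (int, error) {
	var pods corev1.PodList
	if err := r.List(ctx, &pods); err != nil {
		return 0, err
	}
	running := 0
	for i := range pods.Items {
		if pods.Items[i].Status.Phase == corev1.PodRunning {
			running++
		}
	}
	return running, nil
}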

@vincepri
Member

vincepri commented Nov 2, 2020

Not sure if there is a way to avoid it right now. If we don't deep copy the object, users might modify the underlying cache object, which is incorrect and can cause headaches.
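
To illustrate the concern with a hypothetical snippet (not from the codebase): if List handed out the cached objects directly, a seemingly local mutation would be visible to every other consumer of the informer cache.

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Hypothetical illustration: cacheReader is a cache-backed client.Reader.
func mutateFirstPodLabel(ctx context.Context, cacheReader client.Reader) error {
	var pods corev1.PodList
	if err := cacheReader.List(ctx, &pods); err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return nil
	}
	// Labels is a map, so even a shallow copy of the Pod struct shares it with
	// the cached object. Without DeepCopyObject this write would leak into the
	// shared informer cache.
	pods.Items[0].Labels["touched"] = "true"
	return nil
}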

@lobkovilya
Author

What do you think about extracting the deep copy into a separate object that also implements the Reader interface, delegates all requests to the regular CacheReader, and copies the return value? It could be called CopyCacheReader:

var _ client.Reader = &CopyCacheReader{}

type CopyCacheReader struct {
	CacheReader
}

func (c *CopyCacheReader) Get(ctx context.Context, key client.ObjectKey, out runtime.Object) error {
	copyOut := ...
	if err := c.CacheReader.Get(ctx, key, copyOut); err != nil {
		return err
	}
	copyOut = copyOut.(runtime.Object).DeepCopyObject()
	// set 'copyOut' value to 'out'
	return nil
}

func (c *CopyCacheReader) List(ctx context.Context, out runtime.Object, opts ...client.ListOption) error {
	copyOut := ...
	if err := c.CacheReader.List(ctx, copyOut, opts...); err != nil {
		return err
	}
	// iterate over 'copyOut' and call DeepCopyObject for every item, set result to 'out'
	return nil
}

And then NewManager's default NewCache function would use the CopyCacheReader wrapper (so the current behavior is preserved), but in our project we would just replace it with a custom NewCache that doesn't copy.
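
For concreteness, roughly how the wiring could look when building the manager; newNonCopyingCache is a hypothetical constructor for the non-copying cache variant, and the exact cache.Options/NewCacheFunc signatures depend on the controller-runtime version:

import (
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func newManager(cfg *rest.Config) (manager.Manager, error) {
	return manager.New(cfg, manager.Options{
		// The default would keep the copying behavior for existing users;
		// projects that treat all cached objects as immutable could plug in
		// their own constructor here.
		NewCache: func(config *rest.Config, opts cache.Options) (cache.Cache, error) {
			return newNonCopyingCache(config, opts) // hypothetical
		},
	})
}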

Maybe you can recommend some other way to approach this problem, because right now we have to maintain a fork, which is really difficult in the long run.

@alvaroaleman
Member

@lobkovilya not using the DeepCopy there allows for very subtle bugs when people inadvertently manipulate their cache, and it would be a compatibility break.

If you have memory issues, I recommend setting up indexes: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#FieldIndexer
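
For example (a sketch; the exact IndexField signature varies between controller-runtime versions — older releases take runtime.Object, newer ones client.Object, and the index would normally be registered once at setup rather than per call):

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func listPodsOnNode(ctx context.Context, mgr ctrl.Manager, node string) (*corev1.PodList, error) {
	// Register an index on spec.nodeName so only the matching objects are
	// deep-copied out of the cache, instead of the whole pod set.
	if err := mgr.GetFieldIndexer().IndexField(ctx, &corev1.Pod{}, "spec.nodeName",
		func(o runtime.Object) []string {
			return []string{o.(*corev1.Pod).Spec.NodeName}
		}); err != nil {
		return nil, err
	}
	var pods corev1.PodList
	if err := mgr.GetClient().List(ctx, &pods, client.MatchingFields{"spec.nodeName": node}); err != nil {
		return nil, err
	}
	return &pods, nil
}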

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 9, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
