
"non-exact field matches are not supported by the cache" when using fieldSelector with not exact fields #612

Closed
sarjeet2013 opened this issue Sep 24, 2019 · 13 comments · Fixed by #2512
Labels
kind/design Categorizes issue or PR as related to design. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. priority/backlog Higher priority than priority/awaiting-more-evidence.
Milestone

Comments

@sarjeet2013

I am getting the following error when trying to get a list of pods, filtered by state, with a fieldSelector.

"error":"non-exact field matches are not supported by the cache"

Here is the sample code:

fieldSelector, err := fields.ParseSelector("spec.nodeName=" + nodeName + ",status.phase!=" + string(corev1.PodSucceeded) + ",status.phase!=" + string(corev1.PodFailed))
if err != nil {
	return reconcile.Result{}, err
}
activePodsList := &corev1.PodList{}
listOptions := &client.ListOptions{FieldSelector: fieldSelector}
if err := r.List(context.TODO(), listOptions, activePodsList); err != nil {
	return reconcile.Result{}, err
}

This looks like a current limitation that is not supported. See https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/cache/internal/cache_reader.go#L99
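Since the cache only supports a single exact-match field selector, one workaround (a sketch, not from this thread) is to list with the one exact selector the cache can serve (`spec.nodeName=<node>`) and drop the terminal phases in memory. The `Pod` struct and `filterActivePods` helper below are hypothetical, dependency-free stand-ins for the corev1 types, reduced to the fields the filter touches:

```go
package main

import "fmt"

// Pod is a hypothetical stand-in for corev1.Pod, reduced to the two
// fields this example uses.
type Pod struct {
	Name  string
	Phase string // mirrors corev1.PodPhase
}

// filterActivePods drops pods in a terminal phase. In a real controller
// you would first List with the single exact selector
// "spec.nodeName=<node>" (which the cache supports), then apply this
// in-memory filter in place of the "status.phase!=..." clauses.
func filterActivePods(pods []Pod) []Pod {
	var active []Pod
	for _, p := range pods {
		if p.Phase != "Succeeded" && p.Phase != "Failed" {
			active = append(active, p)
		}
	}
	return active
}

func main() {
	pods := []Pod{
		{Name: "a", Phase: "Running"},
		{Name: "b", Phase: "Succeeded"},
		{Name: "c", Phase: "Pending"},
		{Name: "d", Phase: "Failed"},
	}
	fmt.Println(len(filterActivePods(pods))) // 2
}
```

The trade-off is that the cache returns every pod on the node and the controller discards the terminal ones itself, which is usually cheap since the indexed field already narrows the result set.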

@DirectXMan12
Contributor

It's unclear to me how we'd support this without becoming fairly complicated.

/kind feature
/priority backlog

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Sep 30, 2019
@lawrencegripper

In case anyone else lands here from Kubebuilder and wants a workaround: I was able to create a client instance without caching, using the mgr in the autogenerated main.go, like this:

directClient, err := client.New(mgr.GetConfig(), client.Options{Scheme: mgr.GetScheme(), Mapper: mgr.GetRESTMapper()})
if err != nil {
	panic(err)
}

@shawn-hurley

Another option is to use mgr.GetAPIReader(), which gives you a reader interface that hits the API server directly.

DirectXMan12 pushed a commit that referenced this issue Jan 31, 2020
📖 Elaborate on the design principles of KB
@vincepri
Member

/kind design
/priority awaiting-more-evidence

@k8s-ci-robot k8s-ci-robot added kind/design Categorizes issue or PR as related to design. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Feb 21, 2020
@vincepri vincepri added this to the Next milestone Feb 21, 2020
@maplain

maplain commented Feb 29, 2020

Figured out another approach:

  1. use client.MatchingFields;
  2. customize the IndexField function so it returns a value parameterized by multiple fields

eg:

m := client.MatchingFields{
	key: getValue(obj),
}

mgr.GetFieldIndexer().IndexField(obj, key, func(xx) []string {
	return []string{getValue(obj)}
})

func getValue(obj) string {
	return obj.Spec.A + "," + obj.Spec.B + "," + obj.Spec.C
}

Client.List(xx, xxx, m)
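The composite-key trick above can be sketched without any Kubernetes dependencies: the index value is a single string derived from several fields, so one exact match on that key effectively matches all of them at once. The `spec` type, `getValue` helper, and map-based index below are hypothetical simplifications of controller-runtime's field indexer:

```go
package main

import "fmt"

// spec is a hypothetical object with several fields we want to match together.
type spec struct{ A, B, C string }

// getValue builds the composite index key, as in the comment above.
func getValue(s spec) string {
	return s.A + "," + s.B + "," + s.C
}

func main() {
	// A map plays the role of the cache's field index: composite key -> objects.
	index := map[string][]spec{}
	for _, s := range []spec{{"x", "y", "z"}, {"x", "y", "w"}} {
		k := getValue(s)
		index[k] = append(index[k], s)
	}

	// One exact lookup now constrains A, B, and C simultaneously,
	// which is all the cache's single-exact-match restriction allows.
	fmt.Println(len(index[getValue(spec{"x", "y", "z"})])) // 1
}
```

The limitation of this approach is that it only supports equality on the exact field combination baked into the key; inequality clauses like `status.phase!=Failed` still cannot be expressed.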

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 28, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

akutz pushed a commit to vmware-tanzu/vm-operator that referenced this issue Jun 3, 2021
Walk the returned list to verify the binding matches the expected
type and name. In order to use MatchingFields with the caching
Client, an IndexField first needs to be created. However, we need to
match multiple fields, and controller-runtime only easily supports
one field:
    kubernetes-sigs/controller-runtime#612

The expected number of bindings per namespace isn't very many, so it
is unlikely there is much to gain with an index.

The fake client doesn't support MatchingFields, and we didn't
have a negative test so this otherwise went unnoticed until the
FSS was flipped.
@kscharm

kscharm commented Sep 1, 2021

I am getting the following error when trying to get a list of pods, filtered by state, with a fieldSelector.

"error":"non-exact field matches are not supported by the cache"

Here is the sample code:

fieldSelector, err := fields.ParseSelector("spec.nodeName=" + nodeName + ",status.phase!=" + string(corev1.PodSucceeded) + ",status.phase!=" + string(corev1.PodFailed))
if err != nil {
	return reconcile.Result{}, err
}
activePodsList := &corev1.PodList{}
listOptions := &client.ListOptions{FieldSelector: fieldSelector}
if err := r.List(context.TODO(), listOptions, activePodsList); err != nil {
	return reconcile.Result{}, err
}

This looks like a current limitation that is not supported. See https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/cache/internal/cache_reader.go#L99

@sarjeet2013 I'm trying to do the same thing you are (same field selector). Were you able to find a workaround? I have tried the suggestions in the comments, but I think I am going to have to hit the API server directly to get this to work. It seems that field selectors with multiple fields simply do not work, even though I have added the proper field indexers to my manager:

nodeNameIndexFunc := func(obj client.Object) []string {
	return []string{obj.(*v1.Pod).Spec.NodeName}
}

if err = mgr.GetFieldIndexer().IndexField(context.Background(), &v1.Pod{}, "spec.nodeName", nodeNameIndexFunc); err != nil {
	glog.Fatalf("unable to set up field indexer for spec.nodeName: %v", err)
}

phaseIndexFunc := func(obj client.Object) []string {
	return []string{string(obj.(*v1.Pod).Status.Phase)}
}

if err = mgr.GetFieldIndexer().IndexField(context.Background(), &v1.Pod{}, "status.phase", phaseIndexFunc); err != nil {
	glog.Fatalf("unable to set up field indexer for status.phase: %v", err)
}

And I still face the same error as you: non-exact field matches are not supported by the cache

@kscharm

kscharm commented Sep 1, 2021

/reopen

@k8s-ci-robot
Contributor

@kscharm: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@JohnNiang

JohnNiang commented Sep 8, 2021

This is caused exactly by requiresExactMatch in pkg/cache/internal/cache_reader.go:

func requiresExactMatch(sel fields.Selector) (field, val string, required bool) {
	reqs := sel.Requirements()
	if len(reqs) != 1 {
		return "", "", false
	}
	req := reqs[0]
	if req.Operator != selection.Equals && req.Operator != selection.DoubleEquals {
		return "", "", false
	}
	return req.Field, req.Value, true
}

And there is a TODO in CacheReader.List:

func (c *CacheReader) List(_ context.Context, out client.ObjectList, opts ...client.ListOption) error {
	var objs []interface{}
	var err error

	listOpts := client.ListOptions{}
	listOpts.ApplyOptions(opts)

	switch {
	case listOpts.FieldSelector != nil:
		// TODO(directxman12): support more complicated field selectors by
		// combining multiple indices, GetIndexers, etc
		field, val, requiresExact := requiresExactMatch(listOpts.FieldSelector)
		if !requiresExact {
			return fmt.Errorf("non-exact field matches are not supported by the cache")
		}
		// list all objects by the field selector. If this is namespaced and we have one, ask for the
		// namespaced index key. Otherwise, ask for the non-namespaced variant by using the fake "all namespaces"
		// namespace.
		objs, err = c.indexer.ByIndex(FieldIndexName(field), KeyToNamespacedKey(listOpts.Namespace, val))
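This check is why adding per-field indexers alone doesn't help: a selector like `spec.nodeName=n,status.phase!=Failed` parses into two requirements, one of them an inequality, so `requiresExactMatch` rejects it before any index is consulted. A dependency-free sketch of that logic (the `requirement` type is a hypothetical stand-in for `fields.Requirement`):

```go
package main

import "fmt"

// requirement is a hypothetical stand-in for fields.Requirement.
type requirement struct {
	Field, Operator, Value string
}

// requiresExactMatch mirrors the cache_reader.go logic quoted above:
// the selector must contain exactly one requirement, and that
// requirement must be an equality.
func requiresExactMatch(reqs []requirement) bool {
	if len(reqs) != 1 {
		return false
	}
	op := reqs[0].Operator
	return op == "=" || op == "=="
}

func main() {
	single := []requirement{{"spec.nodeName", "=", "node-1"}}
	multi := []requirement{
		{"spec.nodeName", "=", "node-1"},
		{"status.phase", "!=", "Failed"},
	}
	fmt.Println(requiresExactMatch(single), requiresExactMatch(multi)) // true false
}
```

So both the multi-field selector and the `!=` operator independently trip the single-exact-match restriction, which is why kscharm's indexers above made no difference.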
