
Pluralization discrepancy in UnsafeGuessKindToResource still exists #1082

Open
Limorerez opened this issue Apr 7, 2022 · 16 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@Limorerez

Limorerez commented Apr 7, 2022

Expected:

gateway -> gateways

Got:

gateway -> gatewaies

Is there any way to override this mistaken guess?
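
For reference, the wrong guess is easy to reproduce directly with meta.UnsafeGuessKindToResource from k8s.io/apimachinery. A minimal sketch (the GVK is the Gateway API one; everything else is illustrative):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	gvk := schema.GroupVersionKind{
		Group:   "gateway.networking.k8s.io",
		Version: "v1beta1",
		Kind:    "Gateway",
	}
	// The heuristic lowercases the kind and, seeing a trailing "y",
	// always substitutes "ies" -- it never checks for a preceding vowel.
	plural, singular := meta.UnsafeGuessKindToResource(gvk)
	fmt.Println(plural.Resource)   // gatewaies
	fmt.Println(singular.Resource) // gateway
}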

@AlexanderYastrebov

> Is there any way to override this mistaken guess?

A workaround is to pass an empty list of objects:

func NewSimpleClientset(objects ...runtime.Object) *Clientset {
	o := testing.NewObjectTracker(scheme, codecs.UniversalDecoder())
	for _, obj := range objects {
		if err := o.Add(obj); err != nil {
			panic(err)
		}
	}

and then manually call Create() on the tracker instead of Add(), as suggested here:
// NOTE: UnsafeGuessKindToResource is a heuristic and default match. The
// actual registration in apiserver can specify arbitrary route for a
// gvk. If a test uses such objects, it cannot preset the tracker with
// objects via Add(). Instead, it should trigger the Create() function
// of the tracker, where an arbitrary gvr can be specified.
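
Putting those two pieces together, a minimal sketch of the workaround (it assumes the Gateway API types from sigs.k8s.io/gateway-api; the object name and namespace are illustrative):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	clienttesting "k8s.io/client-go/testing"
	gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)

func main() {
	scheme := runtime.NewScheme()
	if err := gatewayv1beta1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	codecs := serializer.NewCodecFactory(scheme)

	// Build the tracker empty instead of preloading objects via Add(),
	// so the plural-guessing heuristic never runs.
	tracker := clienttesting.NewObjectTracker(scheme, codecs.UniversalDecoder())

	gw := &gatewayv1beta1.Gateway{
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
	}

	// Create() lets us spell the resource ourselves.
	gvr := schema.GroupVersionResource{
		Group:    "gateway.networking.k8s.io",
		Version:  "v1beta1",
		Resource: "gateways", // the correct plural, not "gatewaies"
	}
	if err := tracker.Create(gvr, gw, gw.Namespace); err != nil {
		panic(err)
	}
}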

@pmalek
Contributor

pmalek commented Aug 8, 2022

I've also stumbled across this when using

		dynClient := dyn_fake.NewSimpleDynamicClientWithCustomListKinds(scheme.Scheme,
			map[schema.GroupVersionResource]string{
				{
					Group:    "gateway.networking.k8s.io",
					Version:  "v1beta1",
					Resource: "gateways",
				}: "GatewayList",
			},
			...
		)

and then listing gateways (via the unstructured client) yields

panic: coding error: you must register resource to list kind for every resource you're going to LIST when creating the client.  See NewSimpleDynamicClientWithCustomListKinds or register the list into the scheme: gateway.networking.k8s.io/v1beta1, Resource=gateways out of map[/, Resource=:List /v1, Resource=apigroups:APIGroupList /v1, Resource=apiresources:APIResourceList /v1, Resource=componentstatuses:ComponentStatusList /v1, Resource=configmaps:ConfigMapList /v1, Resource=endpoints:EndpointsList /v1, Resource=events:EventList /v1, Resource=limitranges:LimitRangeList /v1, Resource=namespaces:NamespaceList /v1, Resource=nodes:NodeList /v1, Resource=persistentvolumeclaims:PersistentVolumeClaimList /v1, Resource=persistentvolumes:PersistentVolumeList /v1, Resource=pods:PodList /v1, Resource=podtemplates:PodTemplateList /v1, Resource=replicationcontrollers:ReplicationControllerList /v1, Resource=resourcequotas:ResourceQuotaList /v1, Resource=secrets:SecretList /v1, Resource=serviceaccounts:ServiceAccountList /v1, Resource=services:ServiceList admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations:MutatingWebhookConfigurationList admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations:ValidatingWebhookConfigurationList admissionregistration.k8s.io/v1beta1, Resource=mutatingwebhookconfigurations:MutatingWebhookConfigurationList admissionregistration.k8s.io/v1beta1, Resource=validatingwebhookconfigurations:ValidatingWebhookConfigurationList apps/v1, Resource=controllerrevisions:ControllerRevisionList apps/v1, Resource=daemonsets:DaemonSetList apps/v1, Resource=deployments:DeploymentList apps/v1, Resource=replicasets:ReplicaSetList apps/v1, Resource=statefulsets:StatefulSetList apps/v1beta1, Resource=controllerrevisions:ControllerRevisionList apps/v1beta1, Resource=deployments:DeploymentList apps/v1beta1, Resource=statefulsets:StatefulSetList apps/v1beta2, Resource=controllerrevisions:ControllerRevisionList apps/v1beta2, Resource=daemonsets:DaemonSetList apps/v1beta2, Resource=deployments:DeploymentList apps/v1beta2, Resource=replicasets:ReplicaSetList apps/v1beta2, Resource=statefulsets:StatefulSetList autoscaling/v1, Resource=horizontalpodautoscalers:HorizontalPodAutoscalerList autoscaling/v2, Resource=horizontalpodautoscalers:HorizontalPodAutoscalerList autoscaling/v2beta1, Resource=horizontalpodautoscalers:HorizontalPodAutoscalerList autoscaling/v2beta2, Resource=horizontalpodautoscalers:HorizontalPodAutoscalerList batch/v1, Resource=cronjobs:CronJobList batch/v1, Resource=jobs:JobList batch/v1beta1, Resource=cronjobs:CronJobList certificates.k8s.io/v1, Resource=certificatesigningrequests:CertificateSigningRequestList certificates.k8s.io/v1beta1, Resource=certificatesigningrequests:CertificateSigningRequestList coordination.k8s.io/v1, Resource=leases:LeaseList coordination.k8s.io/v1beta1, Resource=leases:LeaseList discovery.k8s.io/v1, Resource=endpointslices:EndpointSliceList discovery.k8s.io/v1beta1, Resource=endpointslices:EndpointSliceList events.k8s.io/v1, Resource=events:EventList events.k8s.io/v1beta1, Resource=events:EventList extensions/v1beta1, Resource=daemonsets:DaemonSetList extensions/v1beta1, Resource=deployments:DeploymentList extensions/v1beta1, Resource=ingresses:IngressList extensions/v1beta1, Resource=networkpolicies:NetworkPolicyList extensions/v1beta1, Resource=podsecuritypolicies:PodSecurityPolicyList extensions/v1beta1, Resource=replicasets:ReplicaSetList flowcontrol.apiserver.k8s.io/v1alpha1, Resource=flowschemas:FlowSchemaList 
flowcontrol.apiserver.k8s.io/v1alpha1, Resource=prioritylevelconfigurations:PriorityLevelConfigurationList flowcontrol.apiserver.k8s.io/v1beta1, Resource=flowschemas:FlowSchemaList flowcontrol.apiserver.k8s.io/v1beta1, Resource=prioritylevelconfigurations:PriorityLevelConfigurationList flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas:FlowSchemaList flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations:PriorityLevelConfigurationList gateway.networking.k8s.io/v1beta1, Resource=gatewaies:GatewayList gateway.networking.k8s.io/v1beta1, Resource=gatewayclasses:GatewayClassList gateway.networking.k8s.io/v1beta1, Resource=httproutes:HTTPRouteList internal.apiserver.k8s.io/v1alpha1, Resource=storageversions:StorageVersionList networking.k8s.io/v1, Resource=ingressclasses:IngressClassList networking.k8s.io/v1, Resource=ingresses:IngressList networking.k8s.io/v1, Resource=networkpolicies:NetworkPolicyList networking.k8s.io/v1beta1, Resource=ingressclasses:IngressClassList networking.k8s.io/v1beta1, Resource=ingresses:IngressList node.k8s.io/v1, Resource=runtimeclasses:RuntimeClassList node.k8s.io/v1alpha1, Resource=runtimeclasses:RuntimeClassList node.k8s.io/v1beta1, Resource=runtimeclasses:RuntimeClassList policy/v1, Resource=poddisruptionbudgets:PodDisruptionBudgetList policy/v1beta1, Resource=poddisruptionbudgets:PodDisruptionBudgetList policy/v1beta1, Resource=podsecuritypolicies:PodSecurityPolicyList rbac.authorization.k8s.io/v1, Resource=clusterrolebindings:ClusterRoleBindingList rbac.authorization.k8s.io/v1, Resource=clusterroles:ClusterRoleList rbac.authorization.k8s.io/v1, Resource=rolebindings:RoleBindingList rbac.authorization.k8s.io/v1, Resource=roles:RoleList rbac.authorization.k8s.io/v1alpha1, Resource=clusterrolebindings:ClusterRoleBindingList rbac.authorization.k8s.io/v1alpha1, Resource=clusterroles:ClusterRoleList rbac.authorization.k8s.io/v1alpha1, Resource=rolebindings:RoleBindingList rbac.authorization.k8s.io/v1alpha1, Resource=roles:RoleList rbac.authorization.k8s.io/v1beta1, Resource=clusterrolebindings:ClusterRoleBindingList rbac.authorization.k8s.io/v1beta1, Resource=clusterroles:ClusterRoleList rbac.authorization.k8s.io/v1beta1, Resource=rolebindings:RoleBindingList rbac.authorization.k8s.io/v1beta1, Resource=roles:RoleList scheduling.k8s.io/v1, Resource=priorityclasses:PriorityClassList scheduling.k8s.io/v1alpha1, Resource=priorityclasses:PriorityClassList scheduling.k8s.io/v1beta1, Resource=priorityclasses:PriorityClassList storage.k8s.io/v1, Resource=csidrivers:CSIDriverList storage.k8s.io/v1, Resource=csinodes:CSINodeList storage.k8s.io/v1, Resource=csistoragecapacities:CSIStorageCapacityList storage.k8s.io/v1, Resource=storageclasses:StorageClassList storage.k8s.io/v1, Resource=volumeattachments:VolumeAttachmentList storage.k8s.io/v1alpha1, Resource=csistoragecapacities:CSIStorageCapacityList storage.k8s.io/v1alpha1, Resource=volumeattachments:VolumeAttachmentList storage.k8s.io/v1beta1, Resource=csidrivers:CSIDriverList storage.k8s.io/v1beta1, Resource=csinodes:CSINodeList storage.k8s.io/v1beta1, Resource=csistoragecapacities:CSIStorageCapacityList storage.k8s.io/v1beta1, Resource=storageclasses:StorageClassList storage.k8s.io/v1beta1, Resource=volumeattachments:VolumeAttachmentList]
Note the gateway.networking.k8s.io/v1beta1, Resource=gatewaies:GatewayList entry in the map above.

from k8s.io/client-go@v0.24.3/dynamic/fake/simple.go:353.

Any workarounds for this?
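
One possible workaround, following the tracker comment quoted earlier: pass no preset objects and seed the dynamic fake's tracker through Create() with an explicit GVR. An untested sketch (names are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	dynfake "k8s.io/client-go/dynamic/fake"
)

func main() {
	gvr := schema.GroupVersionResource{
		Group:    "gateway.networking.k8s.io",
		Version:  "v1beta1",
		Resource: "gateways",
	}

	// Register only the list kind we need; with no preset objects, the
	// plural-guessing Add() path is never exercised.
	client := dynfake.NewSimpleDynamicClientWithCustomListKinds(runtime.NewScheme(),
		map[schema.GroupVersionResource]string{gvr: "GatewayList"},
	)

	gw := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "gateway.networking.k8s.io/v1beta1",
		"kind":       "Gateway",
		"metadata":   map[string]interface{}{"name": "example", "namespace": "default"},
	}}

	// Seed the tracker under the explicitly spelled resource.
	if err := client.Tracker().Create(gvr, gw, "default"); err != nil {
		panic(err)
	}

	list, err := client.Resource(gvr).Namespace("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(list.Items)) // 1
}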

@pmalek
Contributor

pmalek commented Aug 9, 2022

I tried adding a test case in https://github.com/kubernetes/gengo/blob/940203f2dae74b24bb30948aa5a9619ba259d4a5/namer/plural_namer_test.go to verify that this is indeed broken for Gateway/gateway, but it doesn't look like that's the case. I've even checked several versions back to verify which version of gengo is used by https://github.com/kubernetes/code-generator, matched that against my cluster's version, and that didn't yield a failure.


Ok, got it: kubernetes/kubernetes#110053
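
For context on the discrepancy: gengo's plural namer checks whether the letter before a trailing "y" is a vowel, while apimachinery's UnsafeGuessKindToResource does not. A plain-Go paraphrase of the two rules, for illustration only (not the actual library code):

package main

import (
	"fmt"
	"strings"
)

// naiveGuess mirrors the UnsafeGuessKindToResource rule: a trailing "y"
// always becomes "ies".
func naiveGuess(singular string) string {
	if strings.HasSuffix(singular, "y") {
		return strings.TrimSuffix(singular, "y") + "ies"
	}
	return singular + "s"
}

// gengoStyle paraphrases gengo's namer: "ies" only when the "y" follows
// a consonant ("policy" -> "policies"), plain "s" after a vowel.
func gengoStyle(singular string) string {
	if strings.HasSuffix(singular, "y") && len(singular) > 1 &&
		!strings.ContainsRune("aeiou", rune(singular[len(singular)-2])) {
		return strings.TrimSuffix(singular, "y") + "ies"
	}
	return singular + "s"
}

func main() {
	fmt.Println(naiveGuess("gateway")) // gatewaies
	fmt.Println(gengoStyle("gateway")) // gateways
}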

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 7, 2022
@pmalek
Contributor

pmalek commented Nov 7, 2022

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Nov 7, 2022
@k8s-triage-robot

(Same periodic triage-bot message as above.)

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 5, 2023
@pmalek
Contributor

pmalek commented Feb 12, 2023

Still an issue (I'm mostly bumping this because of the problem with the Gateway API).

/remove-lifecycle stale

@k8s-triage-robot

(Same periodic triage-bot message as above.)

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 13, 2023
@pmalek
Contributor

pmalek commented Jun 11, 2023

There seems to be a potential way forward to fix this. I've posted kubernetes/kubernetes#110053 (comment) to confirm it's "the right way"; if so, I can propose something for everyone to review.

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jun 11, 2023
@k8s-triage-robot

(Same periodic triage-bot message as above.)

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 22, 2024
@mrueg
Member

mrueg commented Jan 22, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jan 22, 2024
@k8s-triage-robot

(Same periodic triage-bot message as above.)

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 21, 2024
@pmalek
Contributor

pmalek commented May 5, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label May 5, 2024
@k8s-triage-robot

(Same periodic triage-bot message as above.)

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 3, 2024
@mjnovice

mjnovice commented Aug 9, 2024

Any updates?
