Add pod selector to CRD #1458

Closed
benjamin-bergia opened this issue Aug 19, 2019 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@benjamin-bergia

Hi,
I am testing the Prometheus Operator, which uses a ServiceMonitor CRD that requires a spec.selector.matchLabels. Currently I manage all my labels and selectors with kustomize; some of them are defined in my base and others in my overlays. What is the recommended way to manage the selector of my CRD so it stays in sync with my other selectors? I basically want to have the exact same selector for my Service and my ServiceMonitor.
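For illustration, here is a minimal sketch of the pairing I mean (names, ports, and labels are made up):

```yaml
# Hypothetical Service and ServiceMonitor: the ServiceMonitor's
# spec.selector.matchLabels must stay in sync with the Service's labels.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    type: web
spec:
  selector:
    type: web
  ports:
  - name: web
    port: 8080
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: ServiceMonitor
metadata:
  name: monitor-web
spec:
  endpoints:
  - port: web
  selector:
    matchLabels:
      type: web  # duplicated by hand today; I want kustomize to manage it
```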

@richardmarshall
Contributor

The commonLabels transformer can be configured to inject labels into CRDs. This example shows how you can add additional types and fields that should be updated by the transformer:

https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/crd/README.md
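As a rough sketch (the filename and field path here are just an example; see the README above for the exact format), the transformer config could look like:

```yaml
# labels-config.yaml (hypothetical filename): extend the commonLabels
# transformer so it also updates the selector field of the ServiceMonitor CRD.
commonLabels:
- path: spec/selector/matchLabels
  create: true
  kind: ServiceMonitor
```

The kustomization.yaml would then reference this file via a `configurations:` entry.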

@benjamin-bergia
Author

Hi @richardmarshall,

Thanks for your help. I tested and it does work!

Unfortunately, I have to define the config in each overlay that uses this CRD. And since, for security reasons, kustomize doesn't let me access a config outside the directory of the kustomization.yml, I have to copy my config file into each overlay.

Is there a way to define a "global" transformer config file? If not, can I add my config to my top-level kustomization.yml and have it applied to all my nested ones (which doesn't work for me at the moment)?

I would still like to be able to define a kustomize config that is project-wide; that would be easier to maintain.

@benjamin-bergia
Author

benjamin-bergia commented Aug 20, 2019

So I followed the FAQ and created a base containing my config file and a kustomization.yml. I then import it as a base. Unfortunately it doesn't change the configuration of the other bases and overlays. For example, if I have:

.
├── base
│   ├── web // commonLabels: type=web
│   └── worker // commonLabels: type=worker
└── overlays
    ├── a  // commonLabels: part-of=a
    ├── b  // commonLabels: part-of=b
    └── c  // commonLabels: part-of=c

Where am I supposed to import my configuration for it to work with all these commonLabels?
If I add it in overlays/*/kustomization.yml it only works for the part-of label. In base/*/kustomization.yml it works for the type label only. I also tried with a base/kustomization.yml, in which case it just doesn't do anything.

It does make sense that a transformer configuration in a kustomization file applies only to the transformers in that kustomization file. But where do I set global configurations?

@jbrette
Contributor

jbrette commented Aug 21, 2019

@benjamin-bergia @richardmarshall Have a look here

kustomize accumulates the transformer configs. One easy way to share configs is to have a "common" kustomize folder where you put your configurations, and import that folder as a resource (directly or indirectly) wherever you need it.
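A sketch of that layout (all paths and filenames hypothetical):

```yaml
# common/kustomization.yaml: holds only the shared transformer config
configurations:
- servicemonitor-labels.yaml
---
# overlays/a/kustomization.yaml: imports the shared folder as a resource,
# so its transformer config is accumulated alongside the overlay's own
resources:
- ../../common
- ../../base/web
commonLabels:
  part-of: a
```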

I do believe we can close this bug.

/cc @monopole @Liujingfang1

apiVersion: monitoring.coreos.com/v1alpha1
kind: ServiceMonitor
metadata:
  labels:
    part-of: b
    type: web
  name: monitor-web
spec:
  endpoints:
  - interval: 10s
    port: web
  selector:
    matchLabels:
      part-of: b
      type: web
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: ServiceMonitor
metadata:
  labels:
    part-of: b
    type: worker
  name: monitor-worker
spec:
  endpoints:
  - interval: 10s
    port: worker
  selector:
    matchLabels:
      part-of: b
      type: worker

@richardmarshall
Contributor

@jbrette This seems like a good pattern to me. 👍

@benjamin-bergia
Author

@jbrette When changing the transformer config for a CRD, it doesn't make sense to pass a config every time the CRD shows up in a kustomization.yml. This transformer config is always needed when this specific CRD is used. It is not something that you use or not depending on the situation in your project. I should be able to specify some configuration that is global to kustomize, on my computer or in my project.

Currently I have to ask my colleagues to remember to add the common base to every single kustomization.yml file, which is definitely error-prone and not a good pattern. The alternative is to add the common base wherever the CRD is used, which is even worse. It is also impossible to ensure consistency when using remote resources: how can I ensure that a remote resource is not using its own config? I would like a way to override pkg/transformers/config/defaultconfig/commonlabels.go without opening a PR or importing a base in all my kustomization files. It could be either a CLI switch or, as mentioned in #1463, a config file/directory.

@richardmarshall
Contributor

The challenge with a flag or config file approach is that it introduces external factors that affect the success of a build. I was actually in the camp favoring a config flag, so much so that I implemented it and started trying it out. I changed my mind after needing to copy my config from one system to another, set up config injection into CI, share the config with collaborators, etc.

If you're using custom transformer configs, you're going to need to share that information with your colleagues no matter where it is stored. IMO, putting it into the kustomization is more explicit and less likely to end up in "it worked on my system" situations.

@benjamin-bergia
Author

@richardmarshall Agreed, the config as a kustomization is a pretty neat idea, but not being able to specify it project-wide is definitely annoying. We are currently using kustomize only (no Helm, etc.), so we have different levels of overlays. Some overlays define variations of applications, like worker and web nodes; others define environment-specific settings, like different domain names. The environment overlays are the ones that we call with kustomize build.
I was expecting that importing my configuration in these overlays would apply it to all the resources of the overlay, making it in my case project-wide. But currently, it doesn't.

Currently, transformer configs seem to be the only thing in kustomize that doesn't apply to the whole list of resources, but only to the resources created in the file where they are set. Changing this so a config applies to everything created in that file and the ones below (that is, all the resources, and resources of resources, that it uses) does make sense and doesn't seem dangerous. It would let people choose between making their settings project-wide by placing them in their top-level kustomization file, or applying the config to a specific transformer by putting it in the same file.

If I take back my previous example:

.
├── base
│   ├── web
│   └── worker
└── overlays
    └── a  // resources: web, worker

Configurations specified in web should apply to the transformers used in web only (currently true). Configs in a should apply to transformers in a, web, and worker, overriding any config already present in web or worker. This seems pretty natural and safe to me.
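Under those semantics, something like this (paths and filename hypothetical) would be enough to make the config project-wide:

```yaml
# overlays/a/kustomization.yaml under the proposed behavior: the
# configuration would propagate down into the web and worker bases too.
resources:
- ../../base/web
- ../../base/worker
commonLabels:
  part-of: a
configurations:
- crd-config.yaml
```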

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 24, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
