
Allow excluding some label selectors from commonLabels #157

Closed
mxey opened this issue Jul 10, 2018 · 24 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mxey
Contributor

mxey commented Jul 10, 2018

I thought about this while implementing #140. Sometimes you will have label selectors that you don't want Kustomize to make more specific by adding the commonLabels.

The example use case for me is NetworkPolicy. I want to allow access to my application pods from my ingress controllers, but the ingress controllers are deployed separately, since they are part of the cluster infrastructure. Adding an application-specific label to those NetworkPolicy label selectors might prevent the ingress controllers from accessing my pods.

What I imagine would be a new configuration option, like this:

unmodifiedLabelPaths:
- objref:
    kind: NetworkPolicy
    name: my-application
  fieldref:
    fieldpath: spec.ingress.from.podSelector.matchLabels
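
For illustration, here is a sketch (names and labels hypothetical) of the kind of NetworkPolicy this exclusion would protect. The ingress podSelector points at pods deployed outside the kustomization, so adding commonLabels to it would break the match:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-application
spec:
  podSelector:
    matchLabels:
      app: my-application          # in-scope pods; fine to extend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: ingress-controller  # foreign pods; must NOT be extended
```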

Without this, the alternatives would be:

  • use lots of patches instead of commonLabels to apply my extra labels to everything but the areas I don't want
  • have a separate NetworkPolicy resource outside of the Kustomization, which is unwieldy
  • a potentially generic transformation (Soft-coding transformable resource structs and paths #91)
@Liujingfang1
Contributor

In #140, you added the path spec.ingress.from.podSelector.matchLabels. What if you remove this path in your PR?

@Liujingfang1 Liujingfang1 added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Jul 10, 2018
@mxey
Contributor Author

mxey commented Jul 11, 2018

@Liujingfang1 I think it's more consistent if Kustomize treats all selectors the same and lets you configure exclusions. Regarding NetworkPolicy, I might have one that refers to pods in the same kustomization, like several components of an application working together, or I might have selectors that refer to pods outside. Kustomize cannot know which one I mean but it would be good to support both use cases.

@mxey
Contributor Author

mxey commented Jul 11, 2018

I just realized that in the context of NetworkPolicy at least, I can use matchExpressions instead of matchLabels. I'm not sure if that's the way to go in general.
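
A sketch of that workaround, assuming the default transformer config only targets matchLabels paths: an equivalent selector written with matchExpressions should be left alone by commonLabels.

```yaml
spec:
  ingress:
  - from:
    - podSelector:
        matchExpressions:        # not a matchLabels path, so commonLabels skips it
        - key: app
          operator: In
          values: ["ingress-controller"]
```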

@ryane
Contributor

ryane commented Jul 26, 2018

I just ran into this exact issue after upgrading kustomize to 1.0.5 (I was a little behind and running 1.0.3). I had a NetworkPolicy in a kustomization with commonLabels set. The NetworkPolicy was targeting pods created outside of this kustomization. On 1.0.3, commonLabels did not affect spec.podSelector.matchLabels or spec.ingress.from.podSelector.matchLabels. But on 1.0.5, those selectors were updated with the commonLabels and no longer matched the pods the policy was supposed to apply to (those pods do not have the same labels). This effectively disabled the NetworkPolicy.

I don't know what the best solution is but I agree that it would be nice if we had a little more control here.

@Liujingfang1
Contributor

@mxey @ryane You can solve this problem by putting your kustomization in a different layout. Create another overlay on top of your current kustomization and put the network policy as a resource in that new overlay, where commonLabels is not applied.
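
A minimal sketch of that layout (file names hypothetical; depending on the kustomize version, the base reference is spelled `bases:` or listed under `resources:`):

```yaml
# Layout:
#   base/kustomization.yaml      <- commonLabels live here
#   overlay/kustomization.yaml   <- no commonLabels; shown below
#   overlay/network-policy.yaml  <- selectors stay untouched

# overlay/kustomization.yaml
bases:
- ../base
resources:
- network-policy.yaml
```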

@mxey
Contributor Author

mxey commented Jul 31, 2018

@Liujingfang1 I see two problems with that.

  1. I still want the labels on the network policy itself, so I can find it again, and for pruning.
  2. Adding ever more kustomization levels increases the complexity.

@monopole
Contributor

@mxey @ryane Agree that the tradeoff is more complexity in the kustomization directives and implementation, or more complexity in the file system layout.

We just have to see what happens in practice. If folks commonly have a set of resources that should be labelled, and another set that shouldn't, then we're just talking about two subdirectories. To add/remove labels, you just move the resource from one directory to the other.

Or yes, we could add more complexity to labelling directives, and annotations, etc. Lists of group, version, kind, name, fieldpath etc.

Would like more feedback. Once new directives are offered, it's hard or impossible to remove them.

@ssboisen

ssboisen commented Sep 7, 2018

I've run into this problem as well, but with Deployments. Because a Deployment's spec.selector.matchLabels is an immutable field, commonLabels can't be changed after the initial apply. One use case for changing common labels over time might be a label that describes whom to call if this particular Deployment is causing trouble, or what version of the app is being deployed.

Perhaps spec.selector.* should be completely omitted from commonLabels, or perhaps what @mxey suggests is the better way to go? In any case, it is a problem that commonLabels can't be changed over time.
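
To make the failure mode concrete (a sketch; the label names and values are hypothetical): commonLabels are merged into the immutable Deployment selector, so changing them later makes the next apply fail.

```yaml
# kustomization.yaml, first release
commonLabels:
  app: myapp
  version: v1   # baked into spec.selector.matchLabels of every Deployment

# Changing this to `version: v2` later and re-applying is rejected by the
# API server with an error along the lines of
#   "spec.selector: Invalid value: ...: field is immutable";
# the Deployment then has to be deleted and recreated to pick up the change.
```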

@Liujingfang1
Contributor

We will shortly start a design for a new approach to declaring path configs for different transformers. The desire to exclude some fields when running a transformer will be included in this design. The design is tracked by #91.

@Liujingfang1 Liujingfang1 added kind/documentation Categorizes issue or PR as related to documentation. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Oct 22, 2018
@Liujingfang1
Contributor

We changed pathConfig to fieldSpec. We need to document how fieldSpec can work with this.
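
As a sketch of what such documentation might show (the exact syntax here is an assumption based on the transformer configs of the time): a fieldSpec names a group/version/kind plus a path, and a custom list can be supplied via `configurations:`.

```yaml
# labels-config.yaml (hypothetical file name)
commonLabels:
- path: metadata/labels
  create: true
- path: spec/selector/matchLabels
  create: true
  version: v1
  kind: Service

# kustomization.yaml
configurations:
- labels-config.yaml
```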

@mxey
Contributor Author

mxey commented Nov 15, 2018

Does fieldSpec only match by GVK, or would it allow distinguishing resources based on their name too?

@Benjamintf1
Contributor

I don't think commonLabels should add to ANY selector. apps/v1 doesn't allow changes to selectors, and changes to Deployments/DaemonSets in extensions will cause orphaned DaemonSets. Adding common labels to label selectors should be opt-in: if you don't know what you're doing, you're very liable to end up with a lot of orphaned resources, or unable to deploy at all.

@sascha-coenen

I also think commonLabels should NEVER be added to ANY selector, just as Benjamintf1 stated.
For me, however, the simple reason is that commonLabels are for the resources you want to deploy.
Selectors are meant to express which resource to connect to, so their labels point to foreign objects.
Adding commonLabels to selectors is simply a bug, because it tampers with a given specification.

Compare this to the following example, which is conceptually identical. Imagine that the commonLabels directive added tags to SQL statements:

commonLabels:
  myTag: myValue
sql:
  SELECT a, b, c FROM mytable

In this example, the SQL statement is the resource under our auspices, and it should receive a label. A possible result could be:

SELECT /* myTag: myValue */ a, b, c FROM mytable

But would it make sense if the labels modified the name of the target table, "mytable"?
Obviously not, because that table is not our resource: it is not owned by us, and it has been given a name/identity that is fixed. By rewriting this name to "mytable_myTag=myValue", we would end up referencing a table that does not exist.
With Kubernetes, it's the same story. The selector is just a pointer to a foreign resource which, for all intents and purposes, has nothing to do with the resources we control in our current scope.

@mxey
Contributor Author

mxey commented Jun 4, 2019

The selector is just a pointer to a foreign resource which, for all intents and purposes, has nothing to do with the resources we control in our current scope.

I disagree. A selector can point either to the resources that are in the Kustomization scope or those that are not. Both are valid options. If Kustomize did not extend the selectors, it could not be used to deploy multiple overlays in the same namespace.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2019
@Lawouach

Hello,

I'm facing the same issue of matchLabels interfering with my NetworkPolicy definition, and I'm confused by the state of things. I see they are still targeted: https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/config/defaultconfig/commonlabels.go#L148 But I'm wondering if there is a workaround, or if I basically have to drop commonLabels altogether?
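
One workaround that appears possible with the transformer configs (a sketch, not verified against every version; file names hypothetical): copy the default commonLabels fieldSpec list, drop the NetworkPolicy selector entries, and point `configurations:` at the trimmed copy.

```yaml
# commonlabels-custom.yaml (hypothetical): the default fieldSpec list with the
# NetworkPolicy selector paths removed
commonLabels:
- path: metadata/labels
  create: true
- path: spec/template/metadata/labels
  create: true
  kind: Deployment
# ...keep the other defaults, but omit the entries for
#   spec/podSelector/matchLabels              (kind: NetworkPolicy)
#   spec/ingress/from/podSelector/matchLabels (kind: NetworkPolicy)

# kustomization.yaml
configurations:
- commonlabels-custom.yaml
```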

Thanks,

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 23, 2019
@Lawouach

How is this rotten?

@nlamirault
Contributor

What ??

@Lawouach

The robot changed the lifecycle automatically but the label is a bit harsh :)

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nlamirault
Contributor

nlamirault commented Nov 22, 2019

/reopen :)

@k8s-ci-robot
Contributor

@nlamirault: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
