
Replacements do not work with fields that were set in a parent kustomization. #4099

Closed · zero-below opened this issue Jul 31, 2021 · 19 comments
Labels: kind/bug (Categorizes issue or PR as related to a bug.), triage/under-consideration

@zero-below

Describe the bug

replacements does not pick up field values that are set above the replacements in the dependency graph. This is the case whether the kustomizations are composed as Component or Kustomization files.

Note: the example below uses commonAnnotations to simplify the demonstration; however, the same behavior occurs with static resources (such as a configmap.yaml file that sets an annotation statically), and it occurs with other fields, not solely metadata.annotations.

The use case:
I have a top-level kustomization.yaml for each deployed environment (dev/prod/test/etc.) that adds commonAnnotations to all of the resources in its tree. It then does resources: [ ../k8s/base/ ], and in the shared kustomization there's a replacements entry to, for example, pull a DNS_HOSTNAME from an annotation and put it into a configmap data field and the various other resource spots it needs to go.

This worked well with vars, which seems to render all of the manifests and then do the var replacements afterwards. But with replacements, kustomize does not see any fields that are set above the layer doing the replacement.
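For context, a minimal sketch of the vars pattern this replaces; the annotation name and wiring are illustrative, not part of the repro below:

vars:
- name: DNS_HOSTNAME
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: dns-config
  fieldref:
    fieldpath: metadata.annotations.DNS_HOSTNAME

Any manifest in the tree could then reference $(DNS_HOSTNAME) in a var-substitutable field, and kustomize would substitute the final rendered value after building all layers.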

Files that can reproduce the issue
replacementsinheritance_test.go.txt

a/kustomization.yaml

kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

commonAnnotations:
  replacement: child

configMapGenerator:
- name: dns-config
  options:
    disableNameSuffixHash: true
- name: deployment-config
  options:
    disableNameSuffixHash: true

replacements:
- source:
    kind: ConfigMap
    name: dns-config
    fieldPath: metadata.annotations.replacement
  targets:
  - select:
      kind: ConfigMap
      name: deployment-config
    fieldPaths:
    - data.REPLACEMENT
    options:
      create: true

b/kustomization.yaml

kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

commonAnnotations:
  replacement: parent

resources:
- ../a

kustomize build b

Note that all of the resources have the annotation parent, but replacements performs the replacement using the prior annotation value. Additionally, if you comment out the commonAnnotations in a/kustomization.yaml and then run kustomize build b, it panics with panic: runtime error: invalid memory address or nil pointer dereference.

Expected vs. actual output from the attached test:
   EXPECTED                      ACTUAL
   --------                      ------
   apiVersion: v1                apiVersion: v1
   kind: ConfigMap               kind: ConfigMap
   metadata:                     metadata:
     annotations:                  annotations:
       replacement: parent           replacement: parent
     name: dns-config              name: dns-config
   ---                           ---
   apiVersion: v1                apiVersion: v1
   data:                         data:
X    REPLACEMENT: parent           REPLACEMENT: child
   kind: ConfigMap               kind: ConfigMap
   metadata:                     metadata:
     annotations:                  annotations:
       replacement: parent           replacement: parent
     name: deployment-config       name: deployment-config

Kustomize version

{Version:kustomize/v4.2.0 GitCommit:d53a2ad45d04b0264bcee9e19879437d851cb778 BuildDate:2021-07-01T00:44:28+01:00 GoOs:darwin GoArch:amd64}

Platform

Darwin 20.2.0 Darwin Kernel Version 20.2.0: Wed Dec 2 20:39:59 PST 2020; root:xnu-7195.60.75~1/RELEASE_X86_64 x86_64

@zero-below zero-below added the kind/bug Categorizes issue or PR as related to a bug. label Jul 31, 2021
@k8s-ci-robot
Contributor

@zero-below: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jul 31, 2021
@seh
Contributor

seh commented Aug 1, 2021

I'm suffering a similar problem with replacements. In my case, there's a container environment variable in a pod spec that refers to a Service by its namespace and name as ns/name. I tried to use replacements to keep this environment variable value up to date with changes to the Service's namespace and name, since nameReference can't handle that composite value.

replacements works to a point, but it runs "too early." If I change the namespace of the Service at any point "higher" in the kustomize resource/component graph, replacements fails to take the new namespace into account. This happens even when I situate the replacements field in a Component.

@seh
Contributor

seh commented Aug 1, 2021

Per @natasha41575's #4031 (comment), it sounds like I'm asking for too much in my example. She writes:

The replacement happens in the base - at this point the name of the source resource is source - without the prefix. If you want the prefix from the overlay to be included in the target, you must do the replacement in the overlay. This is just how kustomize works, it does transformations one layer at a time, starting at the base and moving up through the overlays. Each kustomization layer is encapsulated with no knowledge of the overlays.

In my case that's arguably possible, but it breaks encapsulation. The environment variable value I mentioned is something that I'd prefer to hide from (or at least not burden with fixing) any overlays that might change the namespace. I have packaged that use of replacements in a Component, but it seems strange to have to say to authors who might include my kustomization as a base, "If you consider changing the namespace or name of this Service, include this Component—at just the right level—to fix things up again."
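Applied to the a/b repro from this issue, that suggestion amounts to lifting the replacements block up into the overlay; a sketch, reusing the files above:

b/kustomization.yaml

kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

commonAnnotations:
  replacement: parent

resources:
- ../a

replacements:
- source:
    kind: ConfigMap
    name: dns-config
    fieldPath: metadata.annotations.replacement
  targets:
  - select:
      kind: ConfigMap
      name: deployment-config
    fieldPaths:
    - data.REPLACEMENT
    options:
      create: true

Because a layer's commonAnnotations is applied before that layer's replacements, data.REPLACEMENT should render as parent here.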

[Time passes ....]

Reading #4034, @rjferguson21 shows a concise example in #4034 (comment) much like mine here. Rob shows the base as a Kustomization, but I've found that the same problem arises when including one like it as a Component as well.

@zero-below
Author

My example above included files as resources entries, but the behavior is identical with component loads as well; as far as I can tell, replacements can only look down the tree.

But the common use for a replacements block would be to apply environment-specific information to more generic, modular code, and those environment-specific configurations would typically sit at the top of the kustomize tree.

Also, note that the behavior is identical whether the replacements lives in a kustomization or a component, so the normal use case of defining a top-level set of environment-specific settings and then loading a shared configuration with replacements is also broken. It means that to apply replacements for an environment, the replacements block must be cut and pasted into each separate possible config.

@seh
Contributor

seh commented Aug 2, 2021

In my case, I was trying to establish an invariant down in the base, where the Service referred to by this environment variable (in a Deployment pod spec) is defined. Much like name references in kustomize, I was trying to express that this environment variable's value should always "point" at this Service, so that if someone places the Service into a different namespace or adjusts its name, the environment variable's value would follow along accordingly.

As replacements is implemented today, it's not possible to state that invariant at any level below the one in which we change the namespace or name of that Service. @natasha41575's #4034 (comment) suggests that one day it might work more like name references.

@zero-below
Author

And realistically, if I want to take a field from one resource and put it into another resource, I cannot think of any case where I'd want to use any value other than the FINAL RENDERED value of it.

In my example above, we take the annotation from a configmap and put it into the value of another configmap. But after all that's done, the original configmap annotation is changed in the final product.

In what world does it make sense to read a field for use somewhere else when the source isn't yet the final rendered value of that field? It just seems ripe for semi-predictable inconsistencies.

@natasha41575 natasha41575 self-assigned this Aug 4, 2021
@natasha41575
Contributor

One of the reasons (though not the primary one) for our desire to replace vars with replacements is that vars breaks the encapsulation of each kustomization. The kustomize model is that it processes each kustomization layer one at a time, from the bases up to the overlays. In this way, each layer is a step in the pipeline.

vars was the only transformer that was an exception; every other transformer honors this encapsulation and runs in the layer in which it is defined. Having an exception to the rule breaks the pipeline model and complicates things greatly.

Quoting from #2052 (comment):

This seems to be one of the biggest pain-points of using kustomize today, as it breaks the encapsulation of the kustomization and prevents the user from composing bigger kustomizations out of smaller ones.

This is what we are trying to avoid. A user should be able to reuse bases and overlays without concerns about effects on other kustomizations in their pipeline.

As replacements is implemented today, it's not possible to state that invariant at any level below the one in which we change the namespace or name of that Service. @natasha41575's #4034 (comment) suggests that one day it might work more like name references.

I am open to updating name references if the replacement source came from a resource name. This would work similarly to the name prefix and suffix transformers which are able to update name references to bases in overlays. However, I think there needs to be a very, very strong case to break kustomization encapsulation more generally.

@natasha41575 natasha41575 added triage/under-consideration and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 4, 2021
@seh
Contributor

seh commented Aug 5, 2021

I am open to updating name references if the replacement source came from a resource name. This would work similarly to the name prefix and suffix transformers which are able to update name references to bases in overlays.

I think that's what I'm asking for here, if I understand you correctly. I want my replacements source to track the name and namespace of a Kubernetes object.

Here's my current configuration that attempts to do this, trying to update the Kong ingress controller's environment variable:

replacements:
# Ensure that the namespaced reference to Kong's proxy Service remains
# intact as the namespace or name changes.
#
# We can't configure kustomize's name reference transformer to handle
# this composite reference format.
#
# NB: This only works at this level, but not if we change the namespace or
# name at a higher level, even in a component, per the discussion in
# the following issues and PRs:
#   https://github.com/kubernetes-sigs/kustomize/issues/4034
#   https://github.com/kubernetes-sigs/kustomize/pull/4031
#
# If kustomize changes how replacements work to allow them to track
# renamings like its name reference transformer does, then we can move
# this configuration back down into the Kong base kustomization.
- source: &source
    group: ""
    kind: Service
    name: kong-proxy
  targets:
  - select: &target
      group: apps
      kind: Deployment
      name: ingress-kong
    fieldPaths:
    - &fieldPath spec.template.spec.containers.[name=ingress-controller].env.[name=CONTROLLER_PUBLISH_SERVICE].value
    options: &options
      delimiter: /
      index: 1
- source:
    << : *source
    fieldPath: metadata.namespace
  targets:
  - select: *target
    fieldPaths:
    - *fieldPath
    options:
      << : *options
      index: 0

Does my use of replacements fall within what you're willing to consider supporting? You wrote "from a resource name." Does my reference to the Service's "metadata.namespace" field count?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 3, 2021
@jbouchery

Hi,

I'm actually facing the same problem. I'm using components to add (or not) an ingress resource to my base, and I construct my ingress hosts from the namespace name and a label.

Here is an example:

  • ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myIngress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: namespace-app.example.com
    http:
      paths:
      - backend:
          service:
            name: myService
            port:
              name: http
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - namespace-app.example.com
  • kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
- ingress.yaml

replacements:
- source:
    kind: Deployment
    name: myDeployment
    fieldPath: metadata.namespace
  targets:
  - select:
      name: myIngress
      kind: Ingress
    fieldPaths:
    - spec.rules.0.host
    - spec.tls.0.hosts.0
    options:
      delimiter: '-'
      index: 0
- source:
    kind: Deployment
    name: myDeployment
    fieldPath: metadata.labels.app
  targets:
  - select:
      name: myIngress
      kind: Ingress
    fieldPaths:
    - spec.rules.0.host
    - spec.tls.0.hosts.0
    options:
      delimiter: '-'
      index: 1

When I build my base with my component and add my namespace and labels via the kustomize CLI, I get an error like:

fieldPath `metadata.namespace` is missing for replacement source ~G_~V_Deployment|~X|node-dpl:metadata.namespace

Meanwhile, using vars, I get the result I want:

  • ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myIngress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: $(NAMESPACE)-$(APP_NAME).example.com
    http:
      paths:
      - backend:
          service:
            name: myService
            port:
              name: http
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - $(NAMESPACE)-$(APP_NAME).example.com
  • kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
- ingress.yaml

vars:
- name: APP_NAME
  objref:
    kind: Deployment
    name: myDeployment
    apiVersion: apps/v1
  fieldref:
    fieldpath: metadata.labels.app
- name: NAMESPACE
  objref:
    kind: Deployment
    name: myDeployment
    apiVersion: apps/v1
  fieldref:
    fieldpath: metadata.namespace

I'll go with vars for the moment, even though they are slated for deprecation. But I would like some advice on how to handle such things with replacements.

Kustomize version
{Version:kustomize/v4.4.1 GitCommit:b2d65ddc98e09187a8e38adc27c30bab078c1dbf BuildDate:2021-11-11T23:36:27Z GoOs:linux GoArch:amd64}
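Pending better support, one hedged way to express this with replacements, following the layering model described earlier in this thread, is to lift the replacements into the overlay that actually sets the namespace and labels; the file layout here is hypothetical:

overlay/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: my-namespace

resources:
- ../base

components:
- ../components/ingress

replacements:
- source:
    kind: Deployment
    name: myDeployment
    fieldPath: metadata.namespace
  targets:
  - select:
      name: myIngress
      kind: Ingress
    fieldPaths:
    - spec.rules.0.host
    - spec.tls.0.hosts.0
    options:
      delimiter: '-'
      index: 0

The metadata.labels.app replacement would move up the same way. Since a layer's namespace field is applied before its replacements, metadata.namespace should be populated by the time the replacement reads it.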

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 16, 2021
@andrein

andrein commented Dec 16, 2021

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 16, 2021
@afirth
Contributor

afirth commented Jan 26, 2022

Given that vars are deprecated, it seems quite important to fix this.

@kristianjaeger

Any updates on this issue? I was considering using vars after watching what is apparently an old KubeCon talk, and it sounds like replacements might not have the same capabilities. Thanks.

@tobernguyen

Any updates on this issue? I was considering using vars after watching what is apparently an old KubeCon talk, and it sounds like replacements might not have the same capabilities. Thanks.

Totally agree

@mihir83in

I believe this is possible, at least in scenarios where the replacement source is a ConfigMap. ConfigMaps can be loaded from environment variables with the envs option, as described here: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator

Given a folder called base with this kustomization.yaml:

configMapGenerator:
- name: dynamic-repl
  envs:
  - env.required

and a file called env.required containing:

VAR1
VAR2

When kustomize build is run on the folder, either referenced from a different folder via the resources syntax or standalone, the environment variables are passed through to the generated ConfigMap.

E.g. VAR1=test1 VAR2=test2 kustomize build base or from an overlay.

A separate folder called test that references it via the resources syntax would look like:

resources:
- ../path/to/base

VAR1=test1 VAR2=test2 kustomize build test
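Tying this back to replacements: once the generated ConfigMap exists in a layer, it can serve as a replacement source in that same layer; a sketch with a hypothetical Deployment target:

replacements:
- source:
    kind: ConfigMap
    name: dynamic-repl
    fieldPath: data.VAR1
  targets:
  - select:
      kind: Deployment
      name: my-app
    fieldPaths:
    - spec.template.metadata.annotations.var1
    options:
      create: true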

I was looking for a solution to an issue I encountered where I want to set an Istio VirtualService weight property dynamically while keeping all of the changes under version control. Since all environment variables are strings, this does not work for me.

Hope this is useful for your use case, though.

@natasha41575
Contributor

natasha41575 commented Apr 13, 2022

In this issue we've concluded that replacements will not be able to reference resources outside of the current kustomization stack, but tracking name references set by replacements should be supported. We can track this issue in #4524 and #4475.

@mkyc

mkyc commented Nov 23, 2022

Not having this makes life so much harder.

@nourspace

This kind of kills the point of using components: adding something meaningful without having to create yet another transformer in the parent that patches the output of the component.

@natasha41575 isn't the point of components to access the RA of parents?

This is from the KEP:

A kustomization that is marked as a Component has basically the same capabilities as a normal kustomization. The main distinction is that they are evaluated after the resources of the parent kustomization (overlay or component) have been accumulated, and on top of them. This means that:

The only issue here is that replacements inside the components access the not-yet-transformed base resources.
