configMap/secretGenerator name hashes are not applied to resources that include them #1301

Closed
ptemmer opened this issue Jul 2, 2019 · 43 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/resolved Indicates an issue has been resolved or doesn't need to be resolved.

Comments

@ptemmer

ptemmer commented Jul 2, 2019

kustomize version: v3.0.0-pre1 and current master branch

When a namespace is specified in the base kustomization.yaml but none in the overlay kustomization.yaml, the generated output does not update resources that reference a configMapGenerator entry with its proper name hash.

How to reproduce:

  • Note that the deployment resource is included in the base, while the configMapGenerator is included in the overlay.

base/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: default

resources:
- test-deployment.yaml

overlay/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base/

configMapGenerator:
- name: cafe-configmap
  literals:
  - FOO=BAR

When building this overlay, the configMapRef in test-deployment.yaml is left as "cafe-configmap", without any hashed suffix.

kind: ConfigMap
metadata:
  name: cafe-configmap-66d9hf4ghg
---

deployment.yaml
...suppressed output...
      volumes:
      - configMap:
          name: cafe-configmap

If, however, you either remove the namespace definition in the base or add an arbitrary namespace in the overlay (they don't need to match), the referencing resources contain the proper hashed name for the ConfigMap:

kind: ConfigMap
metadata:
  name: cafe-configmap-66d9hf4ghg
---

deployment.yaml
...suppressed output...
      volumes:
      - configMap:
          name: cafe-configmap-66d9hf4ghg
@linjmeyer

I'm seeing the same thing with a project that doesn't use overlays. Without setting namespace and namePrefix, my ConfigMaps are not updated in the Deployment. After setting both, it works as expected.

@ptemmer
Author

ptemmer commented Jul 3, 2019

Today's v3.0.0 release also fails.

@ptemmer
Author

ptemmer commented Jul 3, 2019

This does not happen with v2.1.0, though.

@ptemmer
Author

ptemmer commented Jul 3, 2019

I cannot reproduce the issue you describe @linjmeyer

@lwille

lwille commented Jul 5, 2019

I'm observing a similar issue in 3.0.0, when a base uses nameSuffix or namePrefix.

If I literally reference the suffixed name, kustomize doesn't append the hash either.
When removing the nameSuffix from the base, the name reference is properly updated with the hashed name.

# base/kustomization.yml
---
nameSuffix: -suffix
configMapGenerator:
- name: bar
  files: [file2]
# overlay/kustomization.yml
---
resources:
- ../base
- podspec.yml
# overlay/podspec.yml
kind: Pod
metadata:
  name: pod
spec:
  volumes:
    - name: config-bar
      configMap:
        name: bar

expected output:

apiVersion: v1
data:
  file2: ""
kind: ConfigMap
metadata:
  name: bar-suffix-8hb7b57g9t
---
kind: Pod
metadata:
  name: pod
spec:
  volumes:
  - configMap:
      name: bar-suffix-8hb7b57g9t
    name: bar

actual output:

apiVersion: v1
data:
  file2: ""
kind: ConfigMap
metadata:
  name: bar-suffix-8hb7b57g9t
---
kind: Pod
metadata:
  name: pod
spec:
  volumes:
  - configMap:
      name: bar
    name: bar

@sethpollack
Contributor

I'm seeing this as well: resources in a kustomization.yaml that have a namePrefix get ignored.

@Liujingfang1 Liujingfang1 added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Jul 10, 2019
@jbrette
Contributor

jbrette commented Jul 19, 2019

@ptemmer This seems to be fixed by this PR.

Can you confirm that the following test output is valid and close the bug?

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: component2
  name: test-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: component2
  template:
    metadata:
      labels:
        app: component2
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - cat /etc/config/component2 && sleep 60
        image: k8s.gcr.io/busybox
        name: component2
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
      volumes:
      - configMap:
          name: cafe-configmap-bm6m88fk92
        name: config-volume

@hidekuro

Does this affect only pure k8s resources?
At least it didn't work with a CRD like the Argo Events Sensor.

@Liujingfang1
Contributor

@hidekuro You need to add configurations in kustomization.yaml so that it knows which field to update. Take a look at this example
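
For reference, a rough sketch of what such a configuration could look like; the CRD kind, group, and field path below are placeholders and must be adapted to the actual resource:

# kustomization.yaml
configurations:
- kustomizeconfig.yaml

# kustomizeconfig.yaml
nameReference:
- kind: ConfigMap
  version: v1
  fieldSpecs:
  - kind: Sensor                # placeholder CRD kind (e.g. an Argo Events Sensor)
    group: argoproj.io          # placeholder API group
    path: spec/template/spec/volumes/configMap/name   # placeholder path to the ConfigMap name field

With this, kustomize knows that the given field holds a ConfigMap name and rewrites it with the hashed name.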

@hidekuro

@Liujingfang1 thx a lot!

@0gajun

0gajun commented Nov 8, 2019

@jbrette I don't think this is fixed 👀

The test you provided works fine. However, with a non-default namespace, it doesn't work.

I changed base/kustomization.yaml in the example as follows.

diff --git a/examples/issues/issue_1301/base/kustomization.yaml b/examples/issues/issue_1301/base/kustomization.yaml
index 9287a9ff..46a7624f 100644
--- a/examples/issues/issue_1301/base/kustomization.yaml
+++ b/examples/issues/issue_1301/base/kustomization.yaml
@@ -1,7 +1,7 @@
 apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization

-namespace: default
+namespace: non-default

 resources:
 - test-deployment.yaml

Then, kustomize build overlay outputs are like below.

apiVersion: v1
data:
  FOO: BAR
kind: ConfigMap
metadata:
  name: cafe-configmap-bm6m88fk92
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: component2
  name: test-deployment
  namespace: non-default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: component2
  template:
    metadata:
      labels:
        app: component2
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - cat /etc/config/component2 && sleep 60
        image: k8s.gcr.io/busybox
        name: component2
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
      volumes:
      - configMap:
          name: cafe-configmap
        name: config-volume

In this case, configMap.name in Deployment doesn't have a hash suffix.

To resolve this issue, we also have to set namespace: non-default in overlay/kustomization.yaml.

Is this expected behavior?

@jbrette
Contributor

jbrette commented Nov 8, 2019

The expected behavior is that two ConfigMaps with the same name can exist in Kubernetes as long as they are in different namespaces.
In your case you are indicating:

  1. Generate a ConfigMap in the default namespace.
  2. Have a Deployment in the non-default namespace point to a ConfigMap in the non-default namespace. Since non-default-ns/cafe-configmap does not exist, kustomize will not touch the reference.
    ==> So it is expected.

To solve it, you are not forced to put everything in the overlay into a new namespace. If you want, you can just put the ConfigMap in the non-default namespace by adding namespace: xxx just below the name: in the configMapGenerator.
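
For illustration, a rough sketch of that overlay, reusing the names from this thread (the namespace value is just an example and must match where the base places the Deployment):

# overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base/

configMapGenerator:
- name: cafe-configmap
  namespace: non-default   # same namespace the base sets on the Deployment
  literals:
  - FOO=BAR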

@0gajun

0gajun commented Nov 11, 2019

Thanks for your really kind explanation!
I got it!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 9, 2020
@steinybot

An alternative workaround is to specify the configMapGenerator in the base/kustomization.yaml and then use behavior: replace in the overlay (see the sketch below).
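
A rough sketch of that layout, reusing the names from the original report (the literal values are placeholders, and the namespace caveats discussed above still apply):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- test-deployment.yaml

configMapGenerator:
- name: cafe-configmap
  literals:
  - FOO=default-value

# overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base/

configMapGenerator:
- name: cafe-configmap
  behavior: replace
  literals:
  - FOO=BAR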

@wvidana

wvidana commented Mar 23, 2020

What we are seeing is that the volume reference doesn't get the hash with kubectl kustomize, but it does with the standalone kustomize.

For example, this code snippet

      volumes:
      - configMap:
          name: cafe-configmap
        name: config-volume

Should generate

      volumes:
      - configMap:
          name: cafe-configmap-XXXXXX
        name: config-volume

But the SHA suffix only gets generated with kustomize build, not with kubectl kustomize.

@priiiiit

But the SHA suffix only gets generated with kustomize build but not with kubectl kustomize

The standalone kustomize and kubectl kustomize are different versions of Kustomize. The one bundled with kubectl is quite old.

Since kubectl v1.14, the kustomize build system has been included in kubectl:

kubectl version    kustomize version
v1.16.0            v2.0.3
v1.15.x            v2.0.3
v1.14.x            v2.0.3

but the latest standalone kustomize release is v3.5.4.

@povilasv

I have the same behaviour, with regular kustomize.

Version:

{Version:kustomize/v3.5.4 GitCommit:3af514fa9f85430f0c1557c4a0291e62112ab026 BuildDate:2020-01-11T03:12:59Z GoOs:linux GoArch:amd64}

I also tried a version compiled from the master branch; it didn't help.

This is how to reproduce:

helm repo add loki https://grafana.github.io/loki/charts
helm repo update

helm template loki --namespace=sys-mon loki/loki-stack > loki.yaml

Create loki-conf.yaml, with contents auth_enabled: false.

Create a kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sys-mon
secretGenerator:
- files:
  - loki.yaml=loki-conf.yaml
  name: loki
  behavior: replace
  namespace: sys-mon

This is what gets generated:

apiVersion: v1
data:
  loki.yaml: YXV0aF9lbmFibGVkOiBmYWxzZQo=
kind: Secret
metadata:
  annotations: {}
  labels:
    app: loki
    chart: loki-0.28.0
    heritage: Helm
    release: loki
  name: loki
  namespace: sys-mon
type: Opaque

And this is the reference in Statefulset:

      volumes:
      - name: config
        secret:
          secretName: loki

Neither of these has a hash.

@povilasv

/remove-lifecycle stale

@jdevalk2

jdevalk2 commented May 30, 2020

@povilasv I can confirm this is definitely still broken as I ran into the same issue

I am setting the secret generator "behavior" to "replace" mode.
This does not work currently.

I tried adding an explicit needs-hash annotation on my k8s Secret manifest to force it.
Result: no effect; still no hash is appended to the secret name.

A regular name suffix (no hash) seems to be applied to secrets just fine.

This is the annotation I added to my secret, without effect:

  annotations:
    kustomize.config.k8s.io/needs-hash: "true"

Maybe relevant for troubleshooting:
The documentation states
https://github.com/kubernetes-sigs/kustomize/blob/master/docs/plugins/README.md#generator-options

"This need-hash annotation is for internal use only it will not show in the output"

This is not true in the above case. It WILL always echo the needs-hash annotation back in the Kustomize output. This could indicate that the entire hash generation step is skipped for the secret generator.

Workaround I found: delete the original k8s Secret, then set the secret generator behavior to "create". Observed: a new Secret is created, which WILL have the hash suffix. But this is unworkable, as I need to have a Secret as part of the original deployment.

@ob1dev

ob1dev commented Aug 25, 2020

To solve it, you are not forced to put everything in the overlay into a new namespace. If you want, you can just put the ConfigMap in the non-default namespace by adding namespace: xxx just below the name: in the configMapGenerator.

Adding namespace: xyz to kustomization.yaml helps, but only when using kustomize build ./overlay (v3.8.1). It doesn't work when using kubectl kustomize ./overlay (kubectl v1.18.8, which still ships with Kustomize v2.0.3).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@varshithreddy21

We can disable the hash using generatorOptions:
https://github.com/kubernetes-sigs/kustomize/blob/master/examples/generatorOptions.md
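
For reference, the relevant stanza in kustomization.yaml is:

generatorOptions:
  disableNameSuffixHash: true

Note that this disables the hash suffix entirely: references stay stable, but changes to the generated ConfigMap/Secret no longer trigger a rollout by themselves.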

@guAnsunyata

The expected behavior is that two ConfigMaps with the same name can exist in Kubernetes as long as they are in different namespaces.
In your case you are indicating:

  1. Generate a ConfigMap in the default namespace.
  2. Have a Deployment in the non-default namespace point to a ConfigMap in the non-default namespace. Since non-default-ns/cafe-configmap does not exist, kustomize will not touch the reference.
    ==> So it is expected.

To solve it, you are not forced to put everything in the overlay into a new namespace. If you want, you can just put the ConfigMap in the non-default namespace by adding namespace: xxx just below the name: in the configMapGenerator.

It works!

@cschlesselmann

I am facing this issue as well.

I am trying to override a ConfigMap and a Secret that are provided in an all-in-one deployment.
My kustomization looks like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/infracloudio/botkube/v0.12.1/deploy-all-in-one.yaml
secretGenerator:
  - name: botkube-communication-secret
    namespace: botkube
    behavior: replace
    files:
      - comm_config.yaml
configMapGenerator:
  - name: botkube-configmap
    namespace: botkube
    behavior: replace
    files:
      - resource_config.yaml

The replacement of the ConfigMap/Secret works fine, but no hash is appended to the name, so an update doesn't trigger a redeployment.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2021
@jketcham

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 11, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 10, 2021
@jketcham

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 10, 2021
@lily-es

lily-es commented Feb 17, 2022

I can also confirm this issue, and can confirm that adding namespace: xyz to the overlay kustomization.yaml works as a workaround. In my case I am using behavior: merge.

base

  - name: x-configmap
    files:
      - static-data.xml=data/k8s-proddata.xml
      - data/x-config.yml

overlay

  - name: x-configmap
    behavior: merge
    files:
      - data/x-config.yml

@jimethn

jimethn commented Mar 21, 2022

I am seeing this same problem, but specifically for CRDs.

If I have a secretGenerator, any Deployments or StatefulSets that reference the generated secret will have the reference updated with the name suffix hash. However, if a CRD references that secret, it does not get the reference updated.
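
As mentioned earlier in the thread, references inside custom resources are only rewritten if kustomize is told where they live via the configurations field. A rough sketch for a Secret referenced by a hypothetical custom resource (kind, group, and path are placeholders):

# kustomizeconfig.yaml, referenced from kustomization.yaml via "configurations:"
nameReference:
- kind: Secret
  version: v1
  fieldSpecs:
  - kind: MyCustomResource           # placeholder kind
    group: example.com               # placeholder API group
    path: spec/credentialsSecretName # placeholder path to the Secret name field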

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 19, 2022
@jketcham

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2022
@omninonsense
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2022
@thesuperzapper

It seems like this issue happens when you have one kustomization (the "outer") that has another kustomization as a base resource (the "inner").

When the "inner" kustomization defines a global namespace, the "outer" kustomization is unaware of this, so if a new ConfigMap is defined in the "outer" (e.g. using configMapGenerator), this ConfigMap will end up in the default namespace.

I think this is the expected behavior for kustomize, and for most users, one of the following will result in the correct output:

  1. Set a global namespace in your "outer" kustomization
  2. Set the namespace explicitly in your ConfigMap

TIP: it turns out that you can set a namespace explicitly in the configMapGenerator:

configMapGenerator:
  - name: pipeline-install-config
    namespace: my-namespace
    literals:
      - aaa=bbb

@KnVerey
Contributor

KnVerey commented Dec 7, 2022

This issue was opened about a very old version of Kustomize. I tried reproducing the original bug report in the issue description, and I am seeing the correct result with kustomize v4.5.7:

apiVersion: v1
data:
  FOO: BAR
kind: ConfigMap
metadata:
  name: cafe-configmap-m2mg5mb749
---
apiVersion: v1
kind: Deployment
metadata:
  name: foo
  namespace: default
spec:
  template:
    spec:
      volumes:
      - configMap:
          name: cafe-configmap-m2mg5mb749 #  <--- suffixed as expected

That said, the namespace on the configmap must match the namespace on the deployment for the reference to be valid. This is the case in the original example because the value "default" is being used (and that, as you might guess, is the default, so matches blank/unspecified). The approaches @thesuperzapper recommended are what I would also suggest to ensure the overlay resources use the same namespace.

If there is a bug remaining under different circumstances than originally reported, please file a new issue. Please note that references inside custom resources cannot work out of the box; you need to use the configurations field (see this example, or search the repo for many similar issues) to teach Kustomize about those references.

/close
/triage resolved

@k8s-ci-robot k8s-ci-robot added the triage/resolved Indicates an issue has been resolved or doesn't need to be resolved. label Dec 7, 2022
@k8s-ci-robot
Contributor

@KnVerey: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
