configMap/secretGenerator name hashes are not applied to resources that include them #1301
Comments
I'm seeing the same thing with a project that doesn't use overlays. Without setting |
Today's v3.0.0 release also fails. |
It does not happen with v2.1.0, though. |
I cannot reproduce the issue you describe @linjmeyer |
I'm observing a similar issue in 3.0.0 when a base uses a namespace. Even if I literally reference the suffixed name, kustomize doesn't append the hash either.
expected output:
actual output:
|
I'm seeing this as well, resources in a kustomization.yaml that have a |
@ptemmer Seems to be fixed by this PR. Can you confirm that the following test is valid and close the bug?

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: component2
  name: test-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: component2
  template:
    metadata:
      labels:
        app: component2
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - cat /etc/config/component2 && sleep 60
        image: k8s.gcr.io/busybox
        name: component2
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
      volumes:
      - configMap:
          name: cafe-configmap-bm6m88fk92
        name: config-volume |
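For context, a kustomization along these lines would generate the referenced ConfigMap; the exact file is not shown in the thread, so the contents below are assumptions based on the base diff and the FOO=BAR output that appear later:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
- test-deployment.yaml
configMapGenerator:
- name: cafe-configmap     # generated name gets a -<hash> suffix, e.g. cafe-configmap-bm6m88fk92
  literals:
  - FOO=BAR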
Does it affect only pure k8s resources? |
@Liujingfang1 thx a lot! |
@jbrette I think it is not fixed yet 👀 The test you provided works fine. However, with a non-default namespace it doesn't work. I changed the base kustomization as follows:

diff --git a/examples/issues/issue_1301/base/kustomization.yaml b/examples/issues/issue_1301/base/kustomization.yaml
index 9287a9ff..46a7624f 100644
--- a/examples/issues/issue_1301/base/kustomization.yaml
+++ b/examples/issues/issue_1301/base/kustomization.yaml
@@ -1,7 +1,7 @@
 apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
-namespace: default
+namespace: non-default
 resources:
 - test-deployment.yaml

Then, building produces:

apiVersion: v1
data:
  FOO: BAR
kind: ConfigMap
metadata:
  name: cafe-configmap-bm6m88fk92
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: component2
  name: test-deployment
  namespace: non-default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: component2
  template:
    metadata:
      labels:
        app: component2
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - cat /etc/config/component2 && sleep 60
        image: k8s.gcr.io/busybox
        name: component2
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
      volumes:
      - configMap:
          name: cafe-configmap
        name: config-volume

In this case, configMap.name in the Deployment doesn't have a hash suffix. To resolve this, we also have to set the same namespace on the configMapGenerator. Is this expected behavior? |
What is expected is that two ConfigMaps with the same name can exist in Kubernetes as long as they are in different namespaces.
To solve it, you are not forced to put everything in the overlay into a new namespace. If you want, you can just put the ConfigMap in the non-default namespace by adding namespace: xxx just below the name: in the configMapGenerator. |
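A minimal sketch of that suggestion, reusing the names from the example above (the namespace value is illustrative):

configMapGenerator:
- name: cafe-configmap
  namespace: non-default   # same namespace as the Deployment, so the reference gets the hash suffix
  literals:
  - FOO=BAR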
Thanks for your really kind explanation! |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
An alternative workaround is to specify the configMapGenerator in the |
What we are seeing is that the hash suffix is not applied to the resources that reference the generated ConfigMap. For example, this snippet:

volumes:
- configMap:
    name: cafe-configmap
  name: config-volume

should generate:

volumes:
- configMap:
    name: cafe-configmap-XXXXXX
  name: config-volume

But the SHA suffix only gets generated with |
The standalone kustomize works. Since v1.14 (per the kubectl announcement) the kustomize build system has been included in kubectl,
but the latest |
I have the same behaviour, with regular Version:
I also tried to use a version compiled from the master branch; it didn't help. This is how to reproduce:
Create Create a
This is what gets generated:
And this is the reference in the StatefulSet:
Neither of these has a hash. |
/remove-lifecycle stale |
@povilasv I can confirm this is definitely still broken, as I ran into the same issue. I am setting the secret generator "behavior" to "replace" mode. I tried adding an explicit need-hash annotation on my k8s Secret manifest to force it. Regular name suffixes (no hashes) seem to be applied to secrets just fine. This is the annotation I added to my secret, without effect:
Maybe relevant for troubleshooting:
This is not true in the above case: it WILL always output the need-hash annotation back in the kustomize output. This could indicate it skips the entire hash generation step for the secret generator altogether. Workaround I found: delete the original k8s Secret, then set the secret generator mode to "create". Observed: a new Secret is created, which WILL have the hash suffix set. But this is unworkable, as I need to have a Secret as part of the original deployment. |
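For reference, a minimal sketch of the kind of setup being described; the secret name and file name here are hypothetical:

secretGenerator:
- name: my-secret          # hypothetical; must match the name of the existing Secret
  behavior: replace        # replace the data of the Secret already defined in the resources
  files:
  - comm_config.yaml       # hypothetical file providing the secret data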
Adding |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
[https://github.com/kubernetes-sigs/kustomize/blob/master/examples/generatorOptions.md] |
It works! |
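The linked page covers generatorOptions; as a rough illustration of what it documents, hash suffixing can be toggled per kustomization:

generatorOptions:
  disableNameSuffixHash: false   # default; set to true to drop the -<hash> suffix from generated names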
I am facing this issue as well. I am trying to override a configmap and a secret that are provided in an all-in-one deployment.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/infracloudio/botkube/v0.12.1/deploy-all-in-one.yaml
secretGenerator:
- name: botkube-communication-secret
  namespace: botkube
  behavior: replace
  files:
  - comm_config.yaml
configMapGenerator:
- name: botkube-configmap
  namespace: botkube
  behavior: replace
  files:
  - resource_config.yaml

The replacement of the configmap/secret works fine, but no hash is appended to the name, so an update doesn't trigger a redeployment. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
Can also confirm this issue, and can confirm that adding base
overlay
|
I am seeing this same problem, but specifically for CRDs. If I have a |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
It seems like this issue happens when you have one kustomization (the "outer") that has another kustomization as a base resource (the "inner"). When the "inner" kustomization defines a global namespace, name references are only updated where the namespaces match. I think this is the expected behavior for kustomize, and for most users, one of the following will result in the correct output:
TIP: it turns out that you can set a namespace on the configMapGenerator entry:

configMapGenerator:
- name: pipeline-install-config
  namespace: my-namespace
  literals:
  - aaa=bbb |
This issue was opened about a very old version of Kustomize. I tried reproducing the original bug report in the issue description, and I am seeing the correct result with kustomize v4.5.7:

apiVersion: v1
data:
  FOO: BAR
kind: ConfigMap
metadata:
  name: cafe-configmap-m2mg5mb749
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: default
spec:
  template:
    spec:
      volumes:
      - configMap:
          name: cafe-configmap-m2mg5mb749 # <--- suffixed as expected

That said, the namespace on the configmap must match the namespace on the deployment for the reference to be valid. This is the case in the original example because the value "default" is being used (and that, as you might guess, is the default, so it matches blank/unspecified). The approaches @thesuperzapper recommended are what I would also suggest to ensure the overlay resources use the same namespace. If there is a bug remaining under different circumstances than originally reported, please file a new issue. Please note that references inside custom resources cannot work out of the box; you need to use the configurations field (name reference configuration) for that.
/close |
@KnVerey: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
kustomize version: v3.0.0-pre1 and current master branch
When a namespace is specified in the base kustomization.yaml but none in the overlay kustomization.yaml, the output yielded does not modify the resources referencing a configMapGenerator with its proper name-hash.
How to reproduce:
base/kustomization.yaml
overlay/kustomization.yaml
When building this overlay, the configMapRef in test-deployment.yaml is left as "cafe-configmap", without any hashed suffix.
If, however, you either remove the namespace definition in the base or add an arbitrary namespace in the overlay (they don't need to match), the referencing resources contain the proper hashed name for the configmap.
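For illustration, a base/overlay pair along these lines reproduces the behaviour described above; the exact original files are not shown, so the contents below are assumptions modelled on the examples/issues/issue_1301 files referenced earlier in the thread:

base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default            # namespace set only in the base
resources:
- test-deployment.yaml

overlay/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
configMapGenerator:           # no namespace here, which triggers the reported behaviour
- name: cafe-configmap
  literals:
  - FOO=BAR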