
Error when composing several identical bases that use the same var: "var ... already encountered" #1248

Closed
gobengo opened this issue Jun 25, 2019 · 20 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gobengo

gobengo commented Jun 25, 2019

Here is a reduced test case and README: https://github.com/gobengo/kustomize-issue-7f53131c-0253-45a9-87c4-c6ab2f4d55ea

If you run kustomize build . in the root directory, you get an error like:

Error: accumulating resources: recursed merging from path '2cbace6b-c9f0-4f56-aba7-b911c0c85d48': var 'MYSQL_SERVICE' already encountered

The var referenced in the error message is defined in the kustomization.yaml of the (remote) base that both top-level bases use. https://github.com/gobengo/etherpad-lite/blob/master/lib/kubedb-mysql-etherpad-lite/kustomization.yaml#L6

The root directory kustomization simply composes the two directories in here (which are identical and both use etherpad-lite as a base).

kustomize build in either of the base directories works fine and produces a stream of yaml output (try ls -d */ | xargs -L1 kustomize build).

Expected Behavior: I can run kustomize build . in the root directory with no error, and the output is the same as concatenating the outputs of the two bases (joined with '---').

It seems like this should work. My goal is just to run the same remote kustomization in two different namespaces. I also want the remote kustomization to survive namePrefix so it can be used twice in the same namespace (which is why I'm using vars for the service name). Is this a bug, or am I missing something about how vars should work?

EDIT:

  • I tested with kustomize 2.1.0

Update 20190807: I tested with kustomize 3.1.0 and the same error happens: kubectl kustomize github.com/gobengo/kustomize-issue-7f53131c-0253-45a9-87c4-c6ab2f4d55ea.
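The structure that triggers the error can be sketched minimally as follows. Note that the directory names and the remote base URL below are illustrative placeholders, not the exact contents of the linked repo:

```yaml
# kustomization.yaml (root): composes two sibling overlays
resources:
  - instance-a
  - instance-b
---
# instance-a/kustomization.yaml: one of the two identical siblings
namespace: ns-a
resources:
  - github.com/example/etherpad-lite   # remote base that declares var MYSQL_SERVICE
---
# instance-b/kustomization.yaml: the other sibling, same remote base
namespace: ns-b
resources:
  - github.com/example/etherpad-lite   # same base, so MYSQL_SERVICE is declared a second time
```

Building either sibling on its own succeeds; building the root fails because the var accumulator encounters MYSQL_SERVICE twice.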

@jbrette
Contributor

jbrette commented Jun 26, 2019

@gobengo Please have look at: #1253

There is still a significant issue to address (variables pointing at names), but the behavior seems to be coming along.
Reproducing your setup here, this is the kustomize build output:

apiVersion: v1
kind: Namespace
metadata:
  name: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
---
apiVersion: v1
kind: Namespace
metadata:
  name: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
---
apiVersion: v1
data:
  settings.json: |
    {
      "skinName":"colibris",
      "title":"Etherpad on Kubernetes w/ MySQL",
      "dbType": "${ETHERPAD_DB_TYPE:mysql}",
      "dbSettings": {
        "database": "${ETHERPAD_DB_DATABASE}",
        "host": "${ETHERPAD_DB_HOST}",
        "password": "${ETHERPAD_DB_PASSWORD}",
        "user": "${ETHERPAD_DB_USER}"
      }
    }
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
---
apiVersion: v1
data:
  init.sql: |
    create database `etherpad_lite_db`;
    use `etherpad_lite_db`;

    CREATE TABLE `store` (
      `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
      `value` longtext COLLATE utf8mb4_bin NOT NULL,
      PRIMARY KEY (`key`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad-mysql-init
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
---
apiVersion: v1
data:
  settings.json: |
    {
      "skinName":"colibris",
      "title":"Etherpad on Kubernetes w/ MySQL",
      "dbType": "${ETHERPAD_DB_TYPE:mysql}",
      "dbSettings": {
        "database": "${ETHERPAD_DB_DATABASE}",
        "host": "${ETHERPAD_DB_HOST}",
        "password": "${ETHERPAD_DB_PASSWORD}",
        "user": "${ETHERPAD_DB_USER}"
      }
    }
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
---
apiVersion: v1
data:
  init.sql: |
    create database `etherpad_lite_db`;
    use `etherpad_lite_db`;

    CREATE TABLE `store` (
      `key` varchar(100) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
      `value` longtext COLLATE utf8mb4_bin NOT NULL,
      PRIMARY KEY (`key`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
kind: ConfigMap
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql-init
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
spec:
  ports:
  - name: web
    port: 80
    targetPort: web
  selector:
    app: etherpad
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
spec:
  ports:
  - name: web
    port: 80
    targetPort: web
  selector:
    app: etherpad
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etherpad
      k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  template:
    metadata:
      labels:
        app: etherpad
        k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
    spec:
      containers:
      - env:
        - name: ETHERPAD_DB_TYPE
          value: mysql
        - name: ETHERPAD_DB_HOST
          value: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
        - name: ETHERPAD_DB_DATABASE
          value: etherpad_lite_db
        - name: ETHERPAD_DB_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: etherpad-mysql-auth
        - name: ETHERPAD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: etherpad-mysql-auth
        image: etherpad/etherpad:latest
        name: etherpad
        ports:
        - containerPort: 9001
          name: web
        volumeMounts:
        - mountPath: /opt/etherpad-lite/settings.json
          name: config
          subPath: settings.json
        - mountPath: /opt/etherpad/settings.json
          name: config
          subPath: settings.json
      volumes:
      - configMap:
          name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad
        name: config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etherpad
      k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  template:
    metadata:
      labels:
        app: etherpad
        k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
    spec:
      containers:
      - env:
        - name: ETHERPAD_DB_TYPE
          value: mysql
        - name: ETHERPAD_DB_HOST
          value: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
        - name: ETHERPAD_DB_DATABASE
          value: etherpad_lite_db
        - name: ETHERPAD_DB_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: etherpad-mysql-auth
        - name: ETHERPAD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: etherpad-mysql-auth
        image: etherpad/etherpad:latest
        name: etherpad
        ports:
        - containerPort: 9001
          name: web
        volumeMounts:
        - mountPath: /opt/etherpad-lite/settings.json
          name: config
          subPath: settings.json
        - mountPath: /opt/etherpad/settings.json
          name: config
          subPath: settings.json
      volumes:
      - configMap:
          name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad
        name: config
---
apiVersion: kubedb.com/v1alpha1
kind: MySQL
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: add961a2-b5c7-4ccd-b3c7-66f7c03c9c6e
  name: ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad-mysql
  namespace: cstmr-72n16kk2an86kc4855ujzk9a9plo293274l
spec:
  init:
    scriptSource:
      configMap:
        name: etherpad-mysql-init
  storage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: default
  storageType: Durable
  terminationPolicy: WipeOut
  version: 5.7.25
---
apiVersion: kubedb.com/v1alpha1
kind: MySQL
metadata:
  labels:
    k8s.permanent.cloud/appInstallation.id: 45170c85-ec8b-4008-9d57-4524aa16f93f
  name: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql
  namespace: cstmr-zvyjvn35b81rkfkr87fznrpanw5op9x5yo0
spec:
  init:
    scriptSource:
      configMap:
        name: etherpad-mysql-init
  storage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: default
  storageType: Durable
  terminationPolicy: WipeOut
  version: 5.7.25

@Liujingfang1 Liujingfang1 added the kind/bug Categorizes issue or PR as related to a bug. label Jun 26, 2019
@gobengo
Author

gobengo commented Jun 27, 2019

@jbrette Awesome! Thanks for working on this and helping me.

I see something slightly off in that output. Tell me if this seems right.

In the output there is a Deployment with metadata.name = ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52etherpad. It has a container with an env variable like:

        - name: ETHERPAD_DB_HOST
          value: ai-w613mmojuo0qqir4pvc1l4rsr96mm6110ymetherpad-mysql

I would expect the value of this env variable (which was from the kustomize variable that used to error) to have the namePrefix ai-zv58kz2nbox64fkrqptr94nurvqoxz88o52 (the same as the Deployment that contains it). The namePrefix comes from here

The etherpad-lite kustomization is used twice, as a base in two kustomizations each with a different namePrefix. But in the output above, the namePrefix of one of those was applied in all (both) interpolations of the MYSQL_SERVICE kustomization variable. I'd expect each interpolation to use the namePrefix of the kustomization it's contained in.

Hope that makes sense or you can help me understand if I authored my sample kustomizations wrong.
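For context, the var in question would be declared in the shared base with kustomize's objref/fieldref syntax. A hedged sketch of that declaration (the Service name here is illustrative, not taken from the linked repo):

```yaml
vars:
  - name: MYSQL_SERVICE
    objref:
      apiVersion: v1
      kind: Service
      name: etherpad-mysql    # illustrative; each overlay's namePrefix is applied at build time
    fieldref:
      fieldpath: metadata.name
```

Under the expectation described above, $(MYSQL_SERVICE) in one composition would resolve to that composition's prefixed Service name, and in the other composition to the other prefix.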

@ediphy-azorab

I would very much like to be able to build multiple overlays simultaneously from a single file - here's a slightly more minimized example:

> kustomize build https://github.com/ediphy-azorab/var_clash_example/base
2019/09/11 16:46:47 well-defined vars that were never replaced: MY_ENV
apiVersion: v1
data:
  MY_ENV: foo
kind: ConfigMap
metadata:
  name: example-g9bf2mfm2t

> kustomize build https://github.com/ediphy-azorab/var_clash_example/overlay1
apiVersion: v1
data:
  MY_ENV: bar
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay1-d6httb926d
  namespace: overlay1
2019/09/11 16:46:54 well-defined vars that were never replaced: MY_ENV

> kustomize build https://github.com/ediphy-azorab/var_clash_example/overlay2
apiVersion: v1
data:
  MY_ENV: baz
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay2-fm7c465842
  namespace: overlay2
2019/09/11 16:46:57 well-defined vars that were never replaced: MY_ENV

> kustomize build https://github.com/ediphy-azorab/var_clash_example/
Error: accumulating resources: recursed merging from path './overlay2': var 'MY_ENV' already encountered

@jbrette
Contributor

jbrette commented Sep 12, 2019

@ediphy-azorab Have a look at that test environment

The following PR seems to be working:

kustomize build produces

apiVersion: v1
data:
  MY_ENV: bar
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay1-d6httb926d
  namespace: overlay1
---
apiVersion: v1
data:
  MY_ENV: baz
kind: ConfigMap
metadata:
  annotations: {}
  labels: {}
  name: example-overlay2-fm7c465842
  namespace: overlay2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-overlay1
  namespace: overlay1
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_ENV
          value: bar
        name: dep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-overlay2
  namespace: overlay2
spec:
  template:
    spec:
      containers:
      - env:
        - name: SOME_ENV
          value: baz
        name: dep

@sidps

sidps commented Oct 25, 2019

What is the best workaround?

@tkellen
Contributor

tkellen commented Oct 25, 2019

There isn't one. gh-1620 contains the fix, but we can't merge it unless we ignore the other issue it creates.

@jbrette
Contributor

jbrette commented Oct 25, 2019

Our project needs that feature. The PR has been left to rot for four months, so like a lot of things, we will have to maintain the fork until a feature matching that need is actually implemented in kustomize.

So if you check here, you will see that it actually works.

To gain access to the feature, just clone the allinone branch and run "make install".

@tkellen
Contributor

tkellen commented Oct 26, 2019

@jbrette I'm still unable to get the use-case from gh-1600 running using your fork. Would you expect that to work? At the risk of being repetitive, here it is again (it fails with the allinone branch @ b56479f):

mkdir test
cd test
mkdir -p projects/foo/manifests projects/bar/manifests environment
printf domain.com > environment/domain
printf dev > environment/name
printf -- -branch > environment/branch
cat <<EOF > environment/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: environment
    files:
      - name
      - domain
      - branch
vars:
  - name: ENV
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: environment
    fieldref:
      fieldpath: data.name
  - name: DOMAIN
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: environment
    fieldref:
      fieldpath: data.domain
  - name: BRANCH
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: environment
    fieldref:
      fieldpath: data.branch
generatorOptions:
  disableNameSuffixHash: true
EOF
cat <<EOF > projects/foo/kustomization.yml
namespace: foo
resources:
  - ../../environment
  - manifests/ingress.yml
EOF
cat <<EOF > projects/bar/kustomization.yml
namespace: bar
resources:
  - ../../environment
  - manifests/ingress.yml
EOF
cat <<'EOF' > projects/bar/manifests/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bar
spec:
  rules:
    - host: bar$(BRANCH).$(ENV).$(DOMAIN)
      http:
        paths:
        - backend:
            serviceName: bar
            servicePort: http
EOF
cat <<'EOF' > projects/foo/manifests/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
    - host: foo$(BRANCH).$(ENV).$(DOMAIN)
      http:
        paths:
        - backend:
            serviceName: foo
            servicePort: http
EOF
cat <<EOF > kustomization.yml
resources:
  - projects/foo
  - projects/bar
EOF
kustomize build .

I'm currently out of ideas given that gh-1620 isn't a viable solution. I could probably dive in and find some way to rectify the issue, but at this point I'm more inclined to find some way to dynamically generate kustomization.yml files using a scripting language.

@jbrette
Contributor

jbrette commented Oct 26, 2019

This is solved here

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 23, 2020
@rehevkor5

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 13, 2020
@rehevkor5

This also happens when composing multiple different kustomizations (not just multiple identical bases) that use the same variable name. Specifically, when trying to apply kustomizations generated by https://github.com/kubeflow/kfctl by listing them as bases in a top-level kustomization.yaml, several of them include a variable called clusterDomain. Variables created in "sibling" kustomizations shouldn't interfere with each other.
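A minimal sibling layout showing the same clash with different (not identical) bases might look like this; the paths, ConfigMap name, and data key are hypothetical:

```yaml
# kustomization.yaml (root)
resources:
  - app-a   # declares a var named clusterDomain
  - app-b   # independently declares a var with the same name
---
# app-a/kustomization.yaml (and analogously app-b/)
vars:
  - name: clusterDomain
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: cluster-config
    fieldref:
      fieldpath: data.domain
```

Each sibling builds on its own, but the root build fails with "var 'clusterDomain' already encountered".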

@rehevkor5

@jbrette It looks like your "fix" https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_1251_g only works in your custom forked version of kustomize, right?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 11, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sks1995

sks1995 commented Oct 12, 2023

Is there any plan to fix this ?

@k8s-ci-robot
Contributor

@sks1995: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Is there any plan to fix this ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
