Using overlay-provided secrets in a base #1553

Closed
atombender opened this issue Sep 21, 2019 · 18 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

atombender commented Sep 21, 2019

I have a deployment that needs multiple values from a secret. The deployment is in my base, and the secret values are defined in overlays. That's fine.

But the secret doesn't exist in the base, so if my deployment contains the following, it is making an assumption about the secret's name:

# In base/deployment.yaml:
env:
- name: SENTRY_DSN
  valueFrom:
    secretKeyRef:
      name: myapp-whatever
      key: sentryDSN

Ideally this should be name: $(SECRET_NAME), but that isn't allowed because the secret variable is defined by the overlay. The base cannot have a secretGenerator for a secret it doesn't know about.

So my solution is to define the secret in the overlay and inject it using patches:

# In overlays/staging/kustomization.yaml:
secretGenerator:
- name: core
  type: Opaque
  envs:
  - secrets.txt

patchesJSON6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: core
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: SENTRY_DSN
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: sentryDSN

vars:
- name: SECRET_NAME
  objref:
    kind: Secret
    name: core
    apiVersion: v1

This works, but it means much less declaration sharing. There's nothing dynamic here: the overlay-specific config data lives in secrets.txt, not in the env vars, so the block above will be identical for every overlay.

Edit: The above doesn't work. $(SECRET_NAME) is not expanded after patching.

atombender changed the title from "Using variables in bases" to "Using overlay-provided secrets in a base" on Sep 21, 2019
bzub (Member) commented Sep 22, 2019

Have you tried just using the name of the secret you generated (core) in the deployment?

I tested it out and everything looks good that way. In my case the secret name is mysecret, and you can see that the name gets updated in the base deployment.

$ cat base/deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    spec:
      containers:
        - name: test
          image: test
          env:
            - name: SENTRY_DSN
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: key2

$ cat base/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy.yaml

$ cat overlay/kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../base

secretGenerator:
  - name: mysecret
    literals:
      - key1=value1
      - key2=value2

$ kustomize build overlay

apiVersion: v1
data:
  key1: dmFsdWUx
  key2: dmFsdWUy
kind: Secret
metadata:
  name: mysecret-f4h8tk4kkg
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    spec:
      containers:
      - env:
        - name: SENTRY_DSN
          valueFrom:
            secretKeyRef:
              key: key2
              name: mysecret-f4h8tk4kkg
        image: test
        name: test

jbrette (Contributor) commented Sep 22, 2019

You can address your problem in two ways:

  • As shown above, without variables or patches and with no special changes to kustomize. Check here
  • Following what you had in mind, using a patch and a variable. It is a little more complicated, but it still works. Check here. Variables are really powerful, but they often tend to complicate things.

It seems you can close this bug.

atombender (Author) commented

@jbrette Thanks. That alternative solution is complicated and unintuitive. Too many dependency declarations spread across multiple files.

Where is varreference documented? Why can't it be declared inline, without configurations? I'm not sure what it does. From what I can see, it whitelists secretKeyRef.name so it is allowed to refer to a variable. If that is the case, why is this necessary? I don't have Kustomize built with the PR you mention, since it hasn't been merged or, apparently, accepted, so I'd like to understand the mechanics of it.

Where is configurations documented? Is it just a way to include YAML fragments from external files? It doesn't appear to be, since varreference can't be declared inline in kustomization.yaml.

If Kustomize is going to compete with Helm (which we currently use; I'm sounding out alternatives like Kustomize as possible replacements), it has to be easier than this. Kustomize should be able to solve something this simple in a simple, readable way.


For the record, the solution offered by @jbrette isn't really acceptable. Obviously it works, but the base would be making an assumption it shouldn't be making, and it would have a dependency on a resource (the secret) which might not even exist. For example, if an overlay sets a name prefix or generatorOptions: {disableNameSuffixHash: false}, then the secret name will not be what base/deployment.yaml expects. The point of vars, as I understand Kustomize, is to be able to extract dynamic information so you don't have such implicit dependencies, only explicit ones.

atombender (Author) commented Sep 22, 2019

As an aside, the amount of boilerplate in each "environment" (prod, staging, etc.) is not great. Here's my current staging/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: staging

commonLabels:
  environment: staging

namePrefix: myapp-

resources:
- ../../base

generatorOptions:
  labels: {}
  annotations: {}
  disableNameSuffixHash: true

patchesJSON6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: core
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: [... some flag ...]
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: [... some flag ...]
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: NODE_ENV
        value: staging
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: SENTRY_ENVIRONMENT
        value: staging
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: SENTRY_DSN
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: sentryDSN
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: STRIPE_SECRET_KEY
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: stripeSecretKey
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: MG_API_KEY
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: mailgunAPIKey
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: PGPASSWORD
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: postgreSQLPassword

secretGenerator:
- name: core
  type: Opaque
  envs:
  - secrets.txt
  files:
  - serviceaccount-key.json

# The following is boilerplate to allow us to refer to the secret as a variable.
# See https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_1553
# and https://github.com/kubernetes-sigs/kustomize/pull/1217.

vars:
- name: SECRET_NAME
  objref:
    kind: Secret
    name: core
    apiVersion: v1
  fieldref:
    fieldpath: metadata.name

configurations:
- ./varreference.yaml

That's 77 lines that will be duplicated for each overlay. With the exception of the namespace, the common labels, and the environment-specific env var values, everything will be identical for production/kustomization.yaml.

Ideally, I'd move this into base/kustomization.yaml, but it doesn't have the secret data. From what I can tell, there's no way for the base to declare a parameter (e.g. secrets) that must be supplied by an overlay.

Of course, some of the secret data is also shared between production and staging. There doesn't seem to be a mechanism for this, either.

For envvars, with Helm I'd just do this directly in the deployment spec:

env:
- name: NODE_ENV
  value: {{ .Values.environment }}

…and it'd be in one location. It's much more intuitive than this, where each environment has to supply that env var (which isn't optional), which always has the same name but a different value depending on the environment.

jcassee (Contributor) commented Sep 22, 2019

@atombender Not sure if I'm making your problem easier, but I find it useful to reason about Kustomize as if bases are also independently usable. Even if just for spinning up a development version on a laptop. This may mean introducing dummy/fake values for this use case.

In this case, it seems the simplest solution is to use a configMapGenerator/secretGenerator in the base and reference it from the container's envFrom. Then use a configMapGenerator/secretGenerator in the overlay with behavior: merge. You can easily introduce new environment variables this way.

(If this requires an example, let me know.)
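
[Editor's note] A minimal sketch of the approach described above (the names app-secrets, deployment.yaml, and the placeholder literal are hypothetical, not from this thread): the base generates the secret with dummy values so it still builds standalone, the container references it via envFrom, and each overlay merges the real values over it with behavior: merge.

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
secretGenerator:
- name: app-secrets
  literals:
  - sentryDSN=dummy   # placeholder so the base builds on its own
---
# base/deployment.yaml, inside the container spec: reference the whole
# secret with envFrom instead of naming individual keys:
#   envFrom:
#   - secretRef:
#       name: app-secrets   # kustomize rewrites this to the hashed name
---
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
secretGenerator:
- name: app-secrets
  behavior: merge   # real values replace the base's placeholders
  envs:
  - secrets.txt
```

Running kustomize build overlays/staging should then emit a single hashed secret containing the merged data, with the envFrom reference updated by kustomize's name-reference fixup.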

Also, it feels like your adjustment of the Deployment would be easier with a strategic merge patch instead.

Edit: Maybe this is more a support request than an issue? Slack might be a better channel.

atombender (Author) commented

@jcassee Thanks, I didn't know about behavior: merge, since there's almost no documentation for it (the key is not documented where it should be).

This solves things much better than the previous answer: it means the base can set up all the secrets, and the overlays just supply the values.

I'm not sure how to do the same for non-secret data, though. When developing apps, we avoid config files in favour of "12 factor" argument passing, so I need something that works with args, without using ConfigMaps. Otherwise I'd be forced to introduce env vars for those args just so that I can pull the data from a ConfigMap, since args doesn't support that directly.

jcassee (Contributor) commented Sep 22, 2019

Yeah, args is more difficult. I don't really have a generic solution for you there without a specific instance of your problem.

In general, though, I have not seen many Docker images use args in the way you describe. Or they accept environment variables as an alternative, like the ones you hesitate to introduce. (For example, Python's gunicorn web server uses a generic GUNICORN_SOME_VAR=foo → --some-var=foo translation.)
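
[Editor's note] The translation mentioned above can also be done by the image itself. A sketch of such an entrypoint wrapper (the APP_ARG_ prefix and the myserver binary are invented for illustration, not from this thread): it turns prefixed env vars into flags, so Kustomize can keep feeding the container through a ConfigMap or Secret.

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: translate APP_ARG_* environment variables
# into --kebab-case flags, so an image that insists on argument passing can
# still be configured via env vars that Kustomize wires up with envFrom.
set -eu

build_args() {
  # Emit one "--flag=value" line per APP_ARG_* variable currently set.
  env | while IFS='=' read -r key value; do
    case "$key" in
      APP_ARG_*)
        flag=$(printf '%s' "${key#APP_ARG_}" \
          | tr '[:upper:]' '[:lower:]' | tr '_' '-')
        printf -- '--%s=%s\n' "$flag" "$value"
        ;;
    esac
  done
}

# The real entrypoint could then do, for example:
#   exec myserver $(build_args)
```

With APP_ARG_LOG_LEVEL=debug set, build_args emits --log-level=debug, which the wrapper splices into the actual command line.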

jbrette (Contributor) commented Sep 22, 2019

@jcassee @atombender @bzub Our project went through the exact same process of figuring out what should be done with helm and what should be done with kustomize.

The first issue we encountered was the complexity of using variables, which could only be strings and were allowed in a very limited number of places. This is why we implemented this PR.

The second issue was that we could not have a variable that was a map or a list. So we implemented this PR.

The third issue was that it is really hard to mix local/folder variables and global variables, especially when you have a "diamond" import. Let's say you have a common folder declaring resources and variables, which is in turn imported by component1 and component2… and finally your app imports component1 and component2. So we implemented this PR.

In the end, we get a lot of very interesting features, such as being able to define a values.yaml as in helm, while still using the power of kustomize to patch and render it.

Have a look at that example based on your use case. You have a values.yaml as in helm, but also great kustomize features like the secret generator, commonLabels, and patching.

This brings us back to an important warning in the kustomize README: don't try to use variables for everything; very often you can achieve the same thing in a much simpler way using kustomize's built-in transformers.

Also, I would advise you to run

mkdir -p ${GOPATH}/src/sigs.k8s.io
cd ${GOPATH}/src/sigs.k8s.io
git clone -b allinone https://github.com/keleustes/kustomize.git
cd kustomize
make install
cd examples/issues/issue_1553_c
${GOPATH}/bin/kustomize build overlay/staging

Finally, our project is using a mix of kustomize, helm/armadachart. Check here

atombender (Author) commented

@jbrette I agree with the challenges you addressed with those PRs, and I really like the values-file idea. Thanks for creating that example! It's more explicit and less boilerplate-encumbered than anything suggested so far, and makes much more sense to me.

That said, what are the chances that these PRs will be accepted and merged in the near future? I'm not getting any impression from the discussions of what the prognosis is. I don't want to rely on third-party PRs unless I know they are definitely on the path to acceptance; it's too expensive in terms of wasted work. As it stands, Kustomize is still so new and immature that it's hard enough to deal with the official version (bugs, lack of documentation, etc.), never mind an uncertain fork.

bzub (Member) commented Sep 22, 2019

I guess I don't see what's hard or complex about using a secret name in resources that matches the name in the secretGenerator (core). In the examples posted so far I don't see any uses of $(SECRET_NAME) that wouldn't already be handled by kustomize. It's not recommended to use varRefs where kustomize already manages the name of a resource, like the secret name.

atombender (Author) commented

@bzub It's not about complexity. It's about being explicit about dependencies, which should only point in a single direction: overlays should refer to bases, but bases should never refer to overlays.

As @jcassee pointed out, it's a good idea for a base to be valid even without its overlays, which is not true with your suggestion. Putting the secret in the base and then using behavior: merge to override it is a good approach, as it avoids using variables entirely.

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) on Feb 2, 2020
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 3, 2020
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

mnpenner commented Sep 26, 2020

I'm having pretty much this exact same problem, but the merge solution isn't working for me. Here's my setup:

k8s
├── base
│   ├── app.yaml
│   ├── kustomization.yaml
│   └── my.env
├── development
│   ├── golinks.sql
│   ├── kustomization.yaml
│   ├── mariadb.yaml
│   ├── my.cnf
│   └── my.env

base/kustomization.yaml:

namespace: go-mpen
resources:
- app.yaml
images:
- name: server
  newName: reg/proj/server
secretGenerator:
  - name: db-env
    behavior: create
    envs:
      - my.env

development/kustomization.yaml:

resources:
  - ../base
  - mariadb.yaml
configMapGenerator:
  - name: mariadb-config
    files:
      - my.cnf
  - name: initdb-config
    files:
      - golinks.sql  
secretGenerator:
  - name: db-env
    behavior: merge
    envs:
      - my.env
#patchesStrategicMerge:
#  - app.yaml

I'm using db-env in both development/mariadb.yaml and base/app.yaml.

When I try to build this, the secret reference in app.yaml gets the hashed name, but the one in mariadb.yaml does not:

❯ kustomize build k8s/development | grep db-env -B20
...
---
apiVersion: v1
data:
  (secrets)
kind: Secret
metadata:
  annotations: {}
  labels: {}
  name: db-env-km5h98t84b
--
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: go-mpen
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  template:
    metadata:
      labels:
        pod: 338f54d2-8f89-4602-a848-efcbcb63233f
        svc: app
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: db-env-km5h98t84b
--
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: bf4f837d-38d8-4a8a-b105-2f3532c0649b
  serviceName: mariadb
  template:
    metadata:
      labels:
        pod: bf4f837d-38d8-4a8a-b105-2f3532c0649b
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: db-env

If I create the secret in development/kustomization.yaml then it's the other way around -- mariadb.yaml gets updated but app.yaml does not.

But I want to use the same secrets in both base/app.yaml and development/mariadb.yaml -- can I not do that?

jdef commented Sep 27, 2020 via email

mnpenner commented

I think that was it. Thank you @jdef !
