Using overlay-provided secrets in a base #1553
Have you tried just using the name of the secret you generated? I tested it out and everything looks good that way. In my case my secret name is `mysecret`:
base/deploy.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    spec:
      containers:
      - name: test
        image: test
        env:
        - name: SENTRY_DNS
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: key2
```

base/kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy.yaml
```

overlay/kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
secretGenerator:
- name: mysecret
  literals:
  - key1=value1
  - key2=value2
```

Output of `kustomize build overlay`:

```yaml
apiVersion: v1
data:
  key1: dmFsdWUx
  key2: dmFsdWUy
kind: Secret
metadata:
  name: mysecret-f4h8tk4kkg
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    spec:
      containers:
      - env:
        - name: SENTRY_DNS
          valueFrom:
            secretKeyRef:
              key: key2
              name: mysecret-f4h8tk4kkg
        image: test
        name: test
```
You can address your problem two ways. It seems you can close that bug.
@jbrette Thanks. That alternative solution is complicated and unintuitive. Too many dependency declarations spread out between multiple files. If Kustomize is going to compete with Helm (which we currently use; I'm sounding out competitors like Kustomize as possible replacements), Kustomize has to be easier than this. Kustomize should be able to solve something simple like this in a simple, readable way.

For the record, the solution offered by @jbrette isn't really acceptable. Obviously it works, but the base would be making an assumption it shouldn't be making, and it would have a dependency on a resource (the secret) which might not even exist, for example if an overlay sets a name prefix or suffix.
As an aside, the amount of boilerplate in each "environment" (prod, staging, etc.) is not great. Here's my current `kustomization.yaml` for staging:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
commonLabels:
  environment: staging
namePrefix: myapp-
resources:
- ../../base
generatorOptions:
  labels: {}
  annotations: {}
  disableNameSuffixHash: true
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: core
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: [... some flag ...]
    - op: add
      path: /spec/template/spec/containers/0/args/-
      value: [... some flag ...]
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: NODE_ENV
        value: staging
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: SENTRY_ENVIRONMENT
        value: staging
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: SENTRY_DSN
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: sentryDSN
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: STRIPE_SECRET_KEY
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: stripeSecretKey
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: MG_API_KEY
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: mailgunAPIKey
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: PGPASSWORD
        valueFrom:
          secretKeyRef:
            name: $(SECRET_NAME)
            key: postgreSQLPassword
secretGenerator:
- name: core
  type: Opaque
  envs:
  - secrets.txt
  files:
  - serviceaccount-key.json
# The following is boilerplate to allow us to refer to the secret as a variable.
# See https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_1553
# and https://github.com/kubernetes-sigs/kustomize/pull/1217.
vars:
- name: SECRET_NAME
  objref:
    kind: Secret
    name: core
    apiVersion: v1
  fieldref:
    fieldpath: metadata.name
configurations:
- ./varreference.yaml
```

That's 77 lines that will be duplicated for each overlay. With the exception of the namespace, the common labels, and the name prefix, almost none of it varies between overlays. Ideally, I'd move most of this into the base, but I can't, because the secret is defined per overlay.

Of course, some of the secret data is also shared between production and staging. There doesn't seem to be a mechanism for this, either.

For envvars, with Helm I'd just do this directly in the deployment spec:

```yaml
env:
- name: NODE_ENV
  value: {{.Values.environment}}
```

…and it'd be in one location. It's much more intuitive than this, where each environment has to supply that envvar (which isn't optional), which always has the same name, but a different value depending on the environment.
@atombender Not sure if I'm making your problem easier, but I find it useful to reason about Kustomize by assuming bases are also independently usable, even if just for spinning up a development version on a laptop. This may mean introducing dummy/fake values for that use case. In that case, it seems the simplest solution is to use a configMapGenerator/secretGenerator in the base and reference it from the container. (If this requires an example, let me know.)

Also, it feels like your adjustment of the Deployment would be easier with a merge patch instead.

Edit: Maybe this is more a support request than an issue? Slack might be a better channel.
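A minimal sketch of what that pattern could look like (the file layout and the `app-secrets`/`sentryDSN` names are illustrative, not from this thread): the base generates the secret with placeholder values so it builds standalone, and each overlay overrides the data with `behavior: merge`.

```yaml
# base/kustomization.yaml (sketch): the base declares the secret itself,
# with dummy values, so `kustomize build base` works on its own.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy.yaml
secretGenerator:
- name: app-secrets
  literals:
  - sentryDSN=dummy
---
# overlays/staging/kustomization.yaml (sketch): the overlay replaces the
# dummy values with real ones; `behavior: merge` reuses the base's entry
# instead of creating a conflicting second secret.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
secretGenerator:
- name: app-secrets
  behavior: merge
  envs:
  - secrets.env
```

The deployment in the base can then reference `app-secrets` by its plain name, and kustomize rewrites the reference to the hash-suffixed generated name in every build.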
@jcassee Thanks, I didn't know about `behavior: merge`. This solves things much better than the previous answer: the base can set up all the secrets, and the overlays just supply the values.

I'm not sure how to do the same for non-secret data. When developing apps, we avoid config files in favour of "12 factor" argument passing, so I need something that works with envvars.
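For the non-secret side, the same merge pattern presumably works with a `configMapGenerator` (a sketch; the `app-config`/`NODE_ENV` names are illustrative):

```yaml
# base/kustomization.yaml (sketch): non-secret settings with defaults
configMapGenerator:
- name: app-config
  literals:
  - NODE_ENV=development
---
# overlays/staging/kustomization.yaml (sketch): override per environment
configMapGenerator:
- name: app-config
  behavior: merge
  literals:
  - NODE_ENV=staging
```

The container can then pull the whole map in as envvars with `envFrom: [{configMapRef: {name: app-config}}]`, which keeps the 12-factor style without mounting config files.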
Yeah. In general, though, I have not seen many Docker images use arguments for configuration; envvars are the norm.
@jcassee @atombender @bzub Our project went through the exact same path of figuring out what should be done with Helm and what should be done with Kustomize.

The first issue we encountered was the complexity of using variables, which could only be strings and usable in a very limited number of places. This is why we implemented this PR.

The second issue was that we could not have a variable that could be a map or a list. So we implemented this PR.

The third issue was that it is really hard to have a mix of local/folder variables and global variables, especially when you have a "diamond" import. Let's say you have a common folder declaring resources and variables, which is in turn imported by component1 and component2...and finally your app imports component1 and component2. So we implemented this PR.

At the end, we get a lot of very interesting features, such as being able to define a values.yaml like in Helm, but also being able to use the power of Kustomize to patch and render them. Have a look at that example based on your use case. You have values.yaml like in Helm...but you have great features like the secret generator, commonLabels, and patching from Kustomize.

We keep coming back to an important warning in the Kustomize README: basically, don't try to use variables for everything; very often you can do it in a much simpler way using Kustomize's builtin transformers.

Also, I would advise you to run:

```shell
mkdir -p $GOPATH/src/sig.k8s.io
cd $GOPATH/src/sig.k8s.io
git clone -b allinone https://github.com/keleustes/kustomize.git
cd kustomize
make install
cd examples/issues/issue_1553_c
$GOPATH/bin/kustomize build overlay/staging
```

Finally, our project is using a mix of Kustomize and Helm/Armada charts. Check here.
@jbrette I agree with the challenges you addressed with those PRs, and I really like the values-file idea. Thanks for creating that example! It's more explicit and less boilerplate-encumbered than what's been suggested so far, and makes much more sense to me.

That said, what are the chances that these PRs will be accepted and merged in the near future? I'm not getting any impression from the discussions of what the prognosis is. I don't want to rely on third-party PRs unless I know they are definitely on the path to acceptance; it's too expensive in terms of wasted work. As it stands, Kustomize is still so new and immature that it's hard enough to deal with the official version (bugs, lack of documentation, etc.), never mind an uncertain fork.
I guess I don't see what's hard or complex about using a secret name in resources that matches the name in the secretGenerator.
@bzub It's not about complexity. It's about being explicit about dependencies, which should only point in a single direction. Overlays should refer to bases, but bases should never refer to overlays. As @jcassee pointed out, it's a good idea for bases to be valid even without their overlays, which is not true of your suggestion. Putting the secret in the base and then merging overlay values into it keeps the dependency pointing the right way.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm having pretty much this exact same problem, but the merge solution isn't working for me. Here's my setup:

```
k8s
├── base
│   ├── app.yaml
│   ├── kustomization.yaml
│   └── my.env
├── development
│   ├── golinks.sql
│   ├── kustomization.yaml
│   ├── mariadb.yaml
│   ├── my.cnf
│   └── my.env
```

base/kustomization.yaml:

```yaml
namespace: go-mpen
resources:
- app.yaml
images:
- name: server
  newName: reg/proj/server
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
```

development/kustomization.yaml:

```yaml
resources:
- ../base
- mariadb.yaml
configMapGenerator:
- name: mariadb-config
  files:
  - my.cnf
- name: initdb-config
  files:
  - golinks.sql
secretGenerator:
- name: db-env
  behavior: merge
  envs:
  - my.env
#patchesStrategicMerge:
#- app.yaml
```

I'm using `db-env` in both `development/mariadb.yaml` and `base/app.yaml`. When I try to build this, the `app.yaml` one gets hashed, but `mariadb.yaml` does not:

```
❯ kustomize build k8s/development | grep db-env -B20
...
---
apiVersion: v1
data:
  (secrets)
kind: Secret
metadata:
  annotations: {}
  labels: {}
  name: db-env-km5h98t84b
--
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: go-mpen
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  template:
    metadata:
      labels:
        pod: 338f54d2-8f89-4602-a848-efcbcb63233f
        svc: app
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: db-env-km5h98t84b
--
  - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: bf4f837d-38d8-4a8a-b105-2f3532c0649b
  serviceName: mariadb
  template:
    metadata:
      labels:
        pod: bf4f837d-38d8-4a8a-b105-2f3532c0649b
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: db-env
```

If I create the secret in development/kustomization.yaml then it's the other way around: mariadb.yaml gets updated but app.yaml does not. But I want to use the same secrets in both base/app.yaml and development/mariadb.yaml. Can I not do that?
Try adding a namespace to the generators in the overlay
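If I'm reading that suggestion right, it amounts to something like this (a sketch against the setup above; the base already sets `namespace: go-mpen`, and repeating it on the overlay's generator lets kustomize rewrite the generated name in `mariadb.yaml` as well):

```yaml
# development/kustomization.yaml (sketch): give the merged generator the
# same namespace the base uses, so the hashed secret name is substituted
# into both app.yaml and mariadb.yaml
resources:
- ../base
- mariadb.yaml
secretGenerator:
- name: db-env
  namespace: go-mpen
  behavior: merge
  envs:
  - my.env
```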
I think that was it. Thank you @jdef!
I have a deployment that needs multiple values from a secret. The deployment is in my base, and the secret values are defined in overlays. That's fine.

But the secret doesn't exist in the base, which means that if my deployment has this, it's making an assumption about its name:

Ideally this should be `name: $(SECRET_NAME)`, but that isn't allowed, because the secret variable is defined by the overlay. The base cannot have a `secretGenerator` for a secret it doesn't know about.

So my solution is to define the secret in the overlay and inject it using patches:

This works, but that means there's much less declaration sharing going on. There's nothing dynamic here. The overlay-specific config data is in `secrets.txt`, but not in the envvars. The above will be identical for each overlay.

Edit: The above doesn't work. `$(SECRET_NAME)` is not expanded after patching.