Allow placing synced secrets directly in the data key #1196

Closed
eherot opened this issue Mar 23, 2023 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@eherot

eherot commented Mar 23, 2023

Describe the solution you'd like

First, to summarize the problem:
Right now the structure of a secretObject (docs|source) forces you (as far as I could tell, anyway) to load the contents of your secret into a subkey of the data key. This is an unfortunate limitation because the Kubernetes envFrom.secretRef mechanism does not support loading from a subkey.
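
For context, this is roughly how envFrom behaves against a plain Kubernetes Secret (the names below are only illustrative): every top-level key under data becomes one environment variable, and envFrom has no way to reach inside the value of a single key, which is why loading everything under one subkey defeats the purpose here.

# Illustrative only: a plain Secret whose top-level keys map 1:1 to env vars.
apiVersion: v1
kind: Secret
metadata:
  name: example-env-secret   # hypothetical name
type: Opaque
stringData:
  ENVVAR1: value1
  ENVVAR2: value2
---
# Pod spec fragment: envFrom exposes ENVVAR1 and ENVVAR2 directly,
# but it cannot unpack a blob stored under a single key.
# ...
containers:
  - name: app                # hypothetical container name
    envFrom:
      - secretRef:
          name: example-env-secret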

What I wanted to be able to do:
My AWS Secrets Manager secret:

ENVVAR1=value1
ENVVAR2=value2

My SecretProviderClass:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-secret-provider-class
spec:
  provider: aws
  secretObjects:
    - secretName: path-to-aws-secret
      type: Opaque
      data:
      - objectName: path-to-aws-secret
        key: envVars  # Would love to be able to leave this out!
  parameters:
    objects: |
      - objectName: /path/to/aws/secret
        objectType: "secretsmanager"
        objectAlias: path-to-aws-secret

For posterity, the relevant parts of my Deployment:

# ...
spec:
  template:
    spec:
      containers:
        - # ...
          envFrom:
            - secretRef:
                name: path-to-aws-secret
          volumeMounts:
            - name: secret-store-volume
              mountPath: /mnt/secret-store-volume
      volumes:
        - name: secret-store-volume
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: my-secret-provider-class

Expectation:

# In my container
$ echo $ENVVAR1
value1
$ echo $ENVVAR2
value2

Reality:

$ echo $ENVVAR1
$ echo $ENVVAR2
$ echo $envVars
ENVVAR1=value1
ENVVAR2=value2
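
The reality above follows directly from how the sync works: with the SecretProviderClass shown, the synced Kubernetes Secret carries the whole blob under the single key envVars, so envFrom exposes exactly one variable named envVars. A rough sketch of the synced Secret (shown as stringData for readability; a real synced Secret holds base64-encoded data):

# Roughly what ends up in the cluster given the secretObjects mapping above:
# one data key ("envVars") containing the entire file, rather than
# separate ENVVAR1 / ENVVAR2 keys.
apiVersion: v1
kind: Secret
metadata:
  name: path-to-aws-secret
type: Opaque
stringData:
  envVars: |
    ENVVAR1=value1
    ENVVAR2=value2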

Environment:

  • Secrets Store CSI Driver version (use the image tag): v1.3.1
  • Kubernetes version (use kubectl version): 1.23 (but AFAIK this limitation still exists on 1.26)
@eherot eherot added the kind/feature label Mar 23, 2023
@estiscael

Is there any way to achieve what is described in this request?

@eherot
Author

eherot commented Apr 28, 2023

Not that I've found in terms of a workaround, but the fix in the code does look fairly straightforward at first glance...

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 19, 2024
@eherot
Author

eherot commented Jan 19, 2024 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jan 19, 2024
@krzysztofantczak

Really? No interest in this? This is really surprising to me, because manually re-mapping all secret keys one by one (e.g. when they come from Secrets Manager or whatever store) is just insane :D

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jun 13, 2024
@fgarciacode

fgarciacode commented Jun 21, 2024

Hi, I have the same problem. I'm using Azure Key Vault, but I have to import one secret per environment variable, and I have 10 services, each with 5-10 environment variables, so I end up with a lot of secrets in my Azure Key Vault. It is very difficult to maintain my env vars this way; I think it would be better to be able to import a .env file and keep one secret per service.

Maybe I'm wrong and there is an easier way to import a .env using envFrom and fill a secretObject without a key, or a better way to store my .env files. If not, it would be a very nice and powerful feature.

This is my current SecretProviderClass for one of my services:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: $PROVIDER-NAME
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    keyvaultName: $KEYVAULT_NAME
    userAssignedIdentityID: $IDENTITY_ID
    tenantId: $TENANT_ID
    objects:  |
      array:
        - |
          objectName: secret01
          objectType: secret 
        - |
          objectName: secret02
          objectType: secret
        - |
          objectName: secret03
          objectType: secret
        - |
          objectName: secret04
          objectType: secret     
        - |
          objectName: secret05
          objectType: secret
        - |
          objectName: secret06
          objectType: secret
        - |
          objectName: secret07
          objectType: secret       
        - |
          objectName: secret08
          objectType: secret
        - |
          objectName: secret09
          objectType: secret
        - |
          objectName: secret10
          objectType: secret        
        - |
          objectName: secret11
          objectType: secret        
        - |
          objectName: secret12
          objectType: secret
        - |
          objectName: secret13
          objectType: secret
        - |
          objectName: secret14
          objectType: secret
  secretObjects:
  - data:
    - objectName: secret01
      key: ENVIRONMENT01
    - objectName: secret02
      key: ENVIRONMENT02
    - objectName: secret03
      key: ENVIRONMENT03
    - objectName: secret04
      key: ENVIRONMENT04
    - objectName: secret05
      key: ENVIRONMENT05
    - objectName: secret06
      key: ENVIRONMENT06
    - objectName: secret07
      key: ENVIRONMENT07
    - objectName: secret08
      key: ENVIRONMENT08
    - objectName: secret09
      key: ENVIRONMENT09
    - objectName: secret10
      key: ENVIRONMENT10
    - objectName: secret11
      key: ENVIRONMENT11
    - objectName: secret12
      key: ENVIRONMENT12
    - objectName: secret13
      key: ENVIRONMENT13
    - objectName: secret14
      key: ENVIRONMENT14   
    secretName: $SECRET_NAME
    type: Opaque
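
And for completeness, a sketch of how the synced Secret above would typically be consumed (the container name and image are placeholders): with envFrom, each of the fourteen mapped keys becomes one environment variable, which is exactly the per-key bookkeeping this feature request would remove.

# Sketch of the consuming Deployment (names/images are placeholders).
# envFrom exposes ENVIRONMENT01..ENVIRONMENT14 from the synced Secret;
# the CSI volume mount is still required so the driver mounts and syncs it.
# ...
spec:
  template:
    spec:
      containers:
        - name: my-service                      # placeholder
          image: myregistry/my-service:latest   # placeholder
          envFrom:
            - secretRef:
                name: $SECRET_NAME
          volumeMounts:
            - name: secrets-store
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: $PROVIDER-NAME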

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jul 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
