Secret Auto Rotation not working for succeeded and failed pods #1288

Open
fabsrc opened this issue Jun 26, 2023 · 4 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments


fabsrc commented Jun 26, 2023

What steps did you take and what happened:

  • Installed the secrets store CSI driver with secret auto rotation enabled
  • Created a SecretProviderClass for secrets from AWS SecretsManager with Kubernetes Secret sync enabled
  • Used the SecretProviderClass in a CronJob

The first time the CronJob is triggered, everything works as expected. A Kubernetes Secret is created with the secret value from the AWS SecretsManager and the value can be used in the container environment variables.
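For reference, a minimal sketch of this kind of setup is shown below. All names, the schedule, and the Secrets Manager object name are placeholders, not taken from the actual manifests:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: example-aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app/credentials"   # Secrets Manager secret name (placeholder)
        objectType: "secretsmanager"
  # Sync the mounted secret into a Kubernetes Secret so it can be used in env vars
  secretObjects:
    - secretName: example-synced-secret
      type: Opaque
      data:
        - objectName: "my-app/credentials"
          key: password
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "echo job ran"]
              env:
                - name: APP_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: example-synced-secret   # the synced Kubernetes Secret
                      key: password
              volumeMounts:
                # The CSI volume must be mounted for the driver to create and sync the Secret
                - name: secrets-store
                  mountPath: /mnt/secrets-store
                  readOnly: true
          volumes:
            - name: secrets-store
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: example-aws-secrets
```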

If the secret value is now changed in the AWS SecretsManager, the next time the CronJob is triggered, the environment variable in the container is still set to the old value, as the value in the Kubernetes Secret was not updated.

What did you expect to happen:

Ideally the auto rotation would have updated the Kubernetes Secret to the current value from the AWS SecretsManager so that the container always has the latest secret value available.

Anything else you would like to add:

A workaround for this issue is to set successfulJobsHistoryLimit and failedJobsHistoryLimit in the CronJob spec to 0. That way, after a Job finishes, no succeeded or failed Pods belonging to the Job will remain in the cluster, which allows the secrets store CSI driver to delete the Kubernetes Secret and recreate it the next time the CronJob is triggered.
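Applied to a CronJob spec, the workaround looks roughly like this (names, schedule, and container are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "0 * * * *"
  # With both limits set to 0, finished Pods are removed as soon as the Job
  # completes, so the driver can delete the synced Secret and recreate it
  # with the current value on the next scheduled run.
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "echo job ran"]
```

The trade-off is that no Pods (and therefore no logs) of finished Jobs are retained.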

Looking at the code, this behaviour seems to be intentional. I am not sure why auto rotation is skipped for succeeded and failed Pods, but for the use case described above it causes problems.

Which provider are you using:
AWS

Environment:

  • Secrets Store CSI Driver version (use the image tag): v1.3.3
  • Kubernetes version (use kubectl version): v1.26.5
@fabsrc added the kind/bug label on Jun 26, 2023
@ET-Torsten

We are running into the same problem in our setup: AWS EKS and CronJobs that access secrets which are rotated every N days. Currently, our only solution is to remove the secrets once the AWS secrets have rotated, to force regeneration.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 1, 2024

aramase commented May 1, 2024

/remove-lifecycle rotten
/lifecycle frozen

@k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/rotten label on May 1, 2024