
Race condition between Pods gets deployed and SPC creation/updates #1436

Open
JasonXD-CS opened this issue Feb 7, 2024 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


JasonXD-CS commented Feb 7, 2024

What steps did you take and what happened:

We are seeing sporadic failures when services that use the CSI driver are installed for the first time: pods fail with the error below.
The same issue also occurs occasionally when we add a new secret to an existing SPC.

"MountVolume.SetUp failed for volume "<volumename>" : rpc error: code = Unknown desc = failed to get secretproviderclass <Namespace>/<SecretProviderName>, error: SecretProviderClass.secrets-store.csi.x-k8s.io "<SecretProviderName>" not found"

Pods that fail MountVolume setup are usually recycled and recover on their own. However, we have also seen cases where the application code runs before the secrets are mounted, causing the pod to crash and get stuck in CrashLoopBackOff even after restarts.
The workaround is to delete the failing pod, after which it comes back healthy.

We verified that the YAML manifests are configured correctly, and the same charts succeed in most clusters.
We therefore suspect a race condition in which the CSI driver attempts to set up the volume mount before the SPC is created.
Unfortunately we cannot reproduce the bug consistently, but it affects many of our services.

What did you expect to happen:
The SecretProviderClass should be created before the CSI driver tries to set up the volume mount.
If MountVolume setup fails, the pod should remain in the ContainerCreating state and retry the volume mount setup.

Anything else you would like to add:
We have a 20-minute timeout configured for the helm upgrade command, but the pods did not become healthy within that window.
Our current temporary workaround is to delete a pod when it gets stuck on the MountVolume setup failure or fails to find the secrets in its volume.
Having to manually go into the cluster to monitor pods is not practical when we are deploying to many clusters.

Which provider are you using:
Azure KeyVault Provider

Environment:
AzurePublicCloud

  • Secrets Store CSI Driver version: (use the image tag): mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver:v1.3.4-1
  • Kubernetes version: (use kubectl version): v1.27.7
@JasonXD-CS added the kind/bug label on Feb 7, 2024
@JasonXD-CS changed the title from "Race condition between volume mount setup and SPC creation" to "Race condition between Pods gets deployed and SPC creation/updates" on Feb 21, 2024

JasonXD-CS commented Mar 14, 2024

This issue is related to Helm's installation order, where custom resources are always deployed after Deployments. As a result, pods end up running with an outdated SPC spec whenever they are created before the SPC changes are applied.

helm/helm#8439
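For illustration, one way to influence the apply order is to make the SecretProviderClass a Helm pre-install/pre-upgrade hook so it is applied before the rest of the release. This is only a sketch under the assumption that hook-managed resources are acceptable (they are no longer tracked as regular release resources); we have not validated it for our setup, and the provider parameters shown are placeholders:

```yaml
# Sketch: apply the SPC before the Deployment via Helm hooks (not validated).
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: example-spc
  annotations:
    # Hooks are applied before the rest of the release, so the SPC exists
    # (or is updated) before the Deployment's pods are created.
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  provider: azure
  parameters:
    keyvaultName: example-kv                              # placeholder
    tenantId: "00000000-0000-0000-0000-000000000000"      # placeholder
    objects: |
      array:
        - |
          objectName: example-secret
          objectType: secret
```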

@JasonXD-CS

The order of installation can be found here: https://helm.sh/docs/intro/using_helm/

To reproduce this consistently (a minimal sketch of the chart pieces is shown after the steps):

  1. Create a new Helm chart with a SecretProviderClass and a Deployment that mounts a CSI volume; make sure new pods are always recreated, even when nothing else changes.
  2. Add two dummy Jobs. Jobs are created after the Deployment and before custom resources, which guarantees that the pods are created before the SPC update is applied.
  3. Deploy the chart with helm upgrade --install.
  4. Add one more secret to the SecretProviderClass and redeploy.
  5. The new pods will use the outdated SPC configuration, without the secret added in step 4.
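
A minimal sketch of the chart pieces for steps 1 and 2 (placeholder names; the rollme annotation is just one common way to force pod recreation on every upgrade):

```yaml
# templates/deployment.yaml (fragment) -- force new pods on every upgrade.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}   # new value on every render
---
# templates/job-dummy.yaml -- add two of these (dummy-1, dummy-2).
# Helm applies Jobs after Deployments and applies custom resources (the SPC)
# last, so the new pods are already being created before the updated SPC lands.
apiVersion: batch/v1
kind: Job
metadata:
  name: dummy-1
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: sleep
          image: busybox:1.36
          command: ["sleep", "5"]
```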

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 13, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 13, 2024
@amirschw

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Jul 14, 2024