
Restarting the efs-csi-node Pod will cause mounts to hang on v1.6.0 #1270

Open
RyanStan opened this issue Feb 13, 2024 · 4 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

RyanStan (Contributor) commented Feb 13, 2024

/kind bug

Issue discovered with v1.6.0 of the aws-efs-csi-driver

When the EFS client mounts a file system, we redirect the local NFS mount from the Linux kernel to localhost, and then use a proxy process, stunnel, to receive the NFS traffic and forward it to EFS. The stunnel process runs in the efs-csi-node Pods.

Version v1.6.0 of the CSI driver switched hostNetwork=true to hostNetwork=false. This means that Pods in the efs-csi-node DaemonSet launch into a new network namespace whenever they are restarted, which causes an issue: any time these Pods restart, stunnel launches in the new network namespace, while the kernel's local NFS mount to localhost remains in the previous one. The mount hangs because the localhost NFS mount can no longer reach the stunnel process once the Pod has restarted, and processes touching a hung mount go into uninterruptible sleep.

The issue was resolved in v1.7.0 of the driver, where we reverted the hostNetwork change and set hostNetwork=true again. Thus, this issue only affects customers who established mounts while using v1.6.0 of the CSI driver.
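For context, the setting in question lives in the pod template of the efs-csi-node DaemonSet. A minimal sketch of the relevant field (trimmed to just that field; everything else in the stock manifest is omitted):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: efs-csi-node
  namespace: kube-system
spec:
  template:
    spec:
      # v1.6.0 effectively set this to false, placing stunnel in a fresh
      # network namespace on every Pod restart. v1.7.0 restored true, so
      # stunnel stays in the host network namespace and the kernel's
      # localhost NFS mount can still reach it after a restart.
      hostNetwork: true
```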

Workarounds

Any attempt to upgrade or restart the v1.6.0 efs-csi-node DaemonSet will result in EFS mounts on the node hanging.

To work around this issue, you can launch new EKS nodes into your cluster and then deploy a new efs-csi-node DaemonSet, with hostNetwork=true, that targets these new nodes using a Kubernetes node selector (see the sketch below). A rolling migration of your application to these new nodes lets you upgrade to a newer aws-efs-csi-driver version while ensuring that your application doesn't experience downtime due to hanging mounts.
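A minimal sketch of such a second DaemonSet, assuming the new nodes carry a hypothetical label efs-csi-migration: "new" (the label, DaemonSet name, and image tag below are illustrative assumptions, not taken from the stock manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: efs-csi-node-migrated      # hypothetical name for the second DaemonSet
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: efs-csi-node-migrated
  template:
    metadata:
      labels:
        app: efs-csi-node-migrated
    spec:
      hostNetwork: true            # the restored v1.7.0+ setting
      nodeSelector:
        efs-csi-migration: "new"   # assumed label applied only to the new nodes
      containers:
        - name: efs-plugin
          image: amazon/aws-efs-csi-driver:v1.7.0   # assumed tag; use your target version
          # args, volume mounts, and securityContext match the stock
          # efs-csi-node manifest and are omitted here
```

Once the application has drained onto the labeled nodes, the old nodes (and the hung mounts pinned to them) can be terminated.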

This issue was originally discovered here, but I'm making this post to raise visibility.

@k8s-ci-robot added the kind/bug label on Feb 13, 2024
@poblahblahblah

We are also seeing this issue when upgrading from 1.6.0 to 1.7.5.

@nkryption

We are also seeing this issue when upgrading from 1.6.0 to 1.7.2. Any resolution for this?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 19, 2024
@nkryption

We were able to resolve this by rolling out EFS CSI v1.7.2 with the DaemonSet updateStrategy type set to OnDelete (to keep the EFS CSI v1.6.0 DaemonSet pods from restarting), and then rotating all the nodes in the cluster so that the new nodes run the EFS CSI v1.7.2 DaemonSet.
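A sketch of that updateStrategy change, applied as a strategic merge patch with kubectl patch (the patch file name is arbitrary):

```yaml
# kubectl -n kube-system patch daemonset efs-csi-node --patch-file ondelete.yaml
spec:
  updateStrategy:
    type: OnDelete   # existing v1.6.0 pods (and their stunnel processes) keep
                     # running until explicitly deleted, e.g. when the node
                     # is rotated out of the cluster
```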
