Graceful Shutdown for Stateful Workloads (Solving FailedAttachVolume Delays)

Workloads should start on new nodes in seconds, not minutes. So why can it take minutes for disrupted stateful workloads to run on a new node?

Ideally, once a StatefulSet pod terminates, its persistent volume is unmounted and detached from its current node, then attached and mounted on the pod's new node, and the new pod starts Running, all within 10-20 seconds.[^1]

However, with the default configurations of Karpenter v0.37.0 and the EBS CSI Driver v1.31.0, disrupted StatefulSet pods may experience minutes of FailedAttachVolume delays before Running on their new node.

This document reviews the desired disruption flow for stateful workloads, describes the two separate race conditions that cause FailedAttachVolume delays for stateful workloads attempting to run on new nodes, and recommends solutions to these problems.

Disruption of Stateful Workloads Background

Karpenter Graceful Shutdown for Stateless Workloads

From Karpenter: Disruption:

"Karpenter sets a Kubernetes finalizer on each node and node claim it provisions. The finalizer blocks deletion of the node object while the Termination Controller taints and drains the node, before removing the underlying NodeClaim. Disruption is triggered by the Disruption Controller, by the user through manual disruption, or through an external system that sends a delete request to the node object."

For the scope of this document, we will focus on Karpenter's Node Termination Controller and its interactions with the terminating node.

See the following diagram for the relevant sequence of events in the case of only stateless workloads:

Stateless_Workload_Karpenter_Termination

Note: These diagrams abstract away parts of Karpenter/Kubernetes/EC2 in order to remain approachable. For example, we exclude the K8s API Server and EC2 API. Terminating Node represents both the node object and the underlying EC2 Instance. For an example of what other distinctions are missing, see the footnotes.[^2]

Stateful Workloads Overview

Persistent Storage in Kubernetes involves many moving parts, most of which may not be relevant for the decision at hand.

For the purpose of this document, you should know that:

  • The Container Storage Interface (CSI) is a standard way for Container Orchestrators to provision persistent volumes from storage providers and expose block and file storage systems to containers.
  • The AttachDetach Controller watches for stateful workloads that are waiting on their storage and ensures that their volumes are attached to the right node. It also watches for attached volumes that are no longer in use and ensures they are detached.
  • The CSI Controller attaches/detaches volumes to/from nodes whose workloads require Persistent Volumes (i.e., it calls EC2 AttachVolume).[^3] The cluster's view of each attach/detach is recorded in a volumeattachment object; see the sketch after this list.
  • The CSI Node Service mounts[^4] volumes to make them available for use by workloads, and unmounts volumes after the workload terminates to ensure they are no longer in use. It runs on each node. The Kubelet's Volume Manager watches for stateful workloads and calls the CSI Node Service.
  • Mounted != Attached. An attached EBS volume is visible as a block device to a privileged user on the node at /dev/<device-path>. A mounted volume is visible to workload containers at the specified mountPaths. See this StackOverflow post.
  • The CSI Specification states that the container orchestrator must interact with the CSI plugin through the following flow of Remote Procedure Calls when a workload requires persistent storage: ControllerPublishVolume (attach volume to node) -> NodeStageVolume (mount volume to a global node mount-point) -> NodePublishVolume (mount volume to the pod's mount-point), and when the volume is no longer in use: NodeUnpublishVolume -> NodeUnstageVolume -> ControllerUnpublishVolume.
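
To make the attach state concrete, here is a minimal client-go sketch (illustrative only; the function name and the pre-built clientset parameter are assumptions, not part of any implementation referenced in this document) that lists the volumeattachment objects referencing a node:

```go
// Minimal sketch: inspect the cluster's view of attached volumes for one node.
// Each VolumeAttachment object records the result of a ControllerPublishVolume
// call: the CSI driver name, the PV, the node, and whether the attach finished.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listNodeAttachments prints every VolumeAttachment bound to nodeName.
// "c" is any initialized client-go clientset (e.g. built from rest.InClusterConfig()).
func listNodeAttachments(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	vas, err := c.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, va := range vas.Items {
		if va.Spec.NodeName != nodeName {
			continue
		}
		pv := "<unknown>"
		if va.Spec.Source.PersistentVolumeName != nil {
			pv = *va.Spec.Source.PersistentVolumeName
		}
		fmt.Printf("driver=%s pv=%s attached=%t\n", va.Spec.Attacher, pv, va.Status.Attached)
	}
	return nil
}
```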

For the purpose of this document, assume volumes have already been created and will never be deleted.

If you want to dive one level deeper, open the dropdown to see the following diagram of what happens between pod eviction and volume detachment: Stateful Pod Termination

Ideal Disruption Flow for Stateful Workloads

In order for a stateful pod to smoothly migrate from the terminating node to another node, the following steps must occur in order:

  1. Node marked for deletion
  2. Stateful pods must enter terminated state
  3. Volumes must be confirmed as unmounted (By CSI Node Service)
  4. Volumes must be confirmed as detached from instance (By AttachDetach & CSI Controllers)
  5. Karpenter terminates EC2 Instance
  6. Karpenter deletes finalizer on Node

See the following diagram for a more detailed sequence of events.

ideal

Problems

Today, customers with default Karpenter v0.37.0 and EBS CSI Driver v1.31.0 configurations may experience two different kinds of delays once their disrupted stateful workloads are scheduled on a new node.

Problem A. If step 3 (volumes confirmed as unmounted) doesn't happen, there will be a 6+ minute delay.

If volumes are not confirmed as unmounted by the CSI Node Service, Kubernetes cannot confirm the volumes are not in use and will wait a hard-coded 6-minute MaxWaitForUnmountDuration, and confirm the node is unhealthy, before treating the volume as unmounted. See the EBS CSI 6-minute delay FAQ for more context.[^5]

Customers will see the following event on the pod object (note the 6+ minute delay):

```
Warning  FailedAttachVolume      6m51s              attachdetach-controller  Multi-Attach error for volume "pvc-123" Volume is already exclusively attached to one node and can't be attached to another
```

Problem B. If step 4 (volumes detached) doesn't happen before step 5 (instance termination), there will be a 1+ minute delay.

If Karpenter calls EC2 TerminateInstance before the EC2 DetachVolume calls from the EBS CSI Driver Controller pod finish, then the volumes won't be detached until the old instance terminates. This delay depends on how long it takes the underlying instance to enter the terminated state, which varies by instance type: typically 1 minute for m5a.large, and up to 10 minutes for certain metal instances. See Appendix D1 and the instance termination latency measurements for more context.

Customers will see the following events (note the 1-minute delay between the Multi-Attach error AND AttachVolume.Attach failed):

```
Warning  FailedAttachVolume      102s               attachdetach-controller  Multi-Attach error...
Warning  FailedAttachVolume      40s                attachdetach-controller  AttachVolume.Attach failed for volume "pvc-" : rpc error: code = Internal desc = Could not attach volume "vol-" to node "i-"... VolumeInUse: vol- is already attached to an instance
Normal   SuccessfulAttachVolume  33s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc"
```

Customers can determine which delay they are suffering from based on whether AttachVolume.Attach appears in the FailedAttachVolume events.
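
For illustration, a minimal client-go sketch (the function name and the namespace/pod parameters are placeholders, not part of any referenced tooling) that pulls a pod's events and classifies the delay based on the event text above:

```go
package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// classifyAttachDelay inspects a pod's FailedAttachVolume events and reports
// whether they look like Problem A (Multi-Attach error only) or Problem B
// (Multi-Attach error followed by AttachVolume.Attach failures).
func classifyAttachDelay(ctx context.Context, c kubernetes.Interface, namespace, podName string) (string, error) {
	events, err := c.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + podName,
	})
	if err != nil {
		return "", err
	}
	multiAttach, attachFailed := false, false
	for _, ev := range events.Items {
		if ev.Reason != "FailedAttachVolume" {
			continue
		}
		if strings.Contains(ev.Message, "Multi-Attach error") {
			multiAttach = true
		}
		if strings.Contains(ev.Message, "AttachVolume.Attach failed") {
			attachFailed = true
		}
	}
	switch {
	case attachFailed:
		return "Problem B: detach blocked until the old instance terminates", nil
	case multiAttach:
		return "Problem A: volume still considered in use (6+ minute force-detach wait)", nil
	default:
		return fmt.Sprintf("no FailedAttachVolume events found for pod %s", podName), nil
	}
}
```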

Solutions

A1: To solve A long-term, Kubernetes should ensure volumes are unmounted before critical pods like the CSI Driver Node pod are terminated.

A2: To solve A today, Karpenter should confirm that volumes are not in use, and that the AttachDetach Controller knows this, before deleting the node's finalizer.

B1: To solve B today, Karpenter should wait for volumes to detach by watching volumeattachment objects before terminating the node.

See WIP Kubernetes 1.31/1.32 A1 solution in PR #125070

See a proof-of-concept implementation of A2 & B1 in PR #1294

Finally, we should add the following EBS x Karpenter end-to-end test in karpenter-provider-aws to catch regressions between releases of Karpenter or the EBS CSI Driver (a sketch of the final polling step follows the list):

  1. Deploy statefulset with 1 replica
  2. Consolidate Node
  3. Confirm replica migrated
  4. Confirm replica running within x minutes (where x is high enough to prevent flakes)
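
As a rough illustration of step 4 only, a client-go polling sketch; the function name, the 5-second interval, and the idea of comparing against the old node name are assumptions for illustration, not taken from the actual test suite:

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicaRunning polls until the rescheduled replica is Running on a
// node other than oldNode, or fails once maxWait (the "x minutes") elapses.
func waitForReplicaRunning(ctx context.Context, c kubernetes.Interface, namespace, podName, oldNode string, maxWait time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, maxWait, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // pod may be mid-recreate; keep polling
			}
			// The replica counts as migrated only once it is Running on a different node.
			return pod.Status.Phase == corev1.PodRunning && pod.Spec.NodeName != oldNode, nil
		})
}
```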

Problem A. Preventing 6+ minute delays

If ReadWriteOnce volumes are not unmounted by the CSI Node Service, Kubernetes cannot confirm the volumes are not in use and safe to attach to a new node. Kubernetes will wait 6 minutes[^6] and ensure the node is unhealthy before treating the volume as unmounted and moving forward with a volume detach. See the EBS CSI 6-minute delay FAQ.

Cluster operators will see a FailedAttachVolume event on the pod object with a Multi-Attach error.

When does this happen?

This delay happens when the EBS CSI Node pod is killed before it can unmount all volumes of terminated pods. Note that a pod's volumes can only be unmounted after the pod enters the terminated state.

The EBS CSI Node pod can be killed in two places, depending on whether it tolerates the karpenter.sh/disruption=disrupting taint. If the EBS CSI Node pod does not tolerate the taint, it is killed during the Karpenter Terminator's draining process, after all pods that are not system-critical DaemonSets enter the terminated state. If the EBS CSI Node pod does tolerate the taint, Karpenter's Terminator calls EC2 TerminateInstances once all intolerant pods are terminated; in that case, if Graceful Shutdown is configured on the node, the Kubelet's Node Shutdown Manager attempts to kill the EBS CSI Node pod after all non-critical pods have entered the terminated state.

As of EBS CSI Driver v1.31.0, the EBS CSI Node pod tolerates all taints by default, so we will focus on this second type of race in the following diagram:

6min

Karpenter's terminator cannot drain pods that tolerate its Disrupting taint. Therefore, once it drains the drainable pods, it calls EC2 TerminateInstance on a node.

However, the Shutdown Manager does not wait for all volumes to be unmounted, only for pod termination. This leads to a race condition where the CSI Driver Node pod is killed before all unmounts are completed. See @msau42's diagram:

kubelet_shutdown_race

Today, the EBS CSI Driver attempts to work around these races by utilizing a PreStop hook that tries to keep the Node Service alive for an additional terminationGracePeriodSeconds until all volumes are unmounted. We will explore the shortcomings of this solution later in problem A alternative solutions.

Note: In addition to the Kubelet race, this delay can happen if stateful pods outlive the CSI Node pod, e.g. an operator has a StatefulSet that tolerates all taints and has a longer terminationGracePeriod than the EBS CSI Driver.

Solutions:

We should:

  • A1: Fix the Kubelet race condition upstream for future Kubernetes versions
  • A2: Have Karpenter taint terminated nodes as out-of-service before removing their finalizers.

A1: Fix race at Kubelet level

The Kubelet Shutdown Manager should not kill CSI Driver Pods before volumes are unmounted. This change must be made at kubernetes/kubernetes level.

Because this solution does not rely on changes in Karpenter, please see Active PR #125070 for more information on this solution.

Pros:

  • Other cluster autoscalers will not face this race condition.
  • Reduces pod migration times
  • Reduces the risk of data corruption because the relevant CSI Drivers will perform the required unmount operations

Cons:

  • Unavailable until merged into a Kubernetes release (likely Kubernetes v1.31 or v1.32); possibly able to be backported.
  • If the graceful shutdown period ends BEFORE volumes are unmounted by the CSI Node pod, then Kubernetes cannot confirm the volume was unmounted, and the volume will still see the delay. (E.g., an unmount takes more than 1 minute, which is longer than a 45-second graceful shutdown period.)

A2: Taint node as out-of-service after termination

While this race should be fixed at the Kubelet level long-term, we still need a solution for earlier versions of Kubernetes.

One solution is to mark terminated nodes with the out-of-service taint.

In v1.26, Kubernetes enabled the Non-graceful node shutdown handling feature by default. This introduces the node.kubernetes.io/out-of-service taint, which can be used to mark a node as permanently shut down. See more context in appendix D2

Once Karpenter confirms that an instance is terminated, adding this taint to the node object will allow the Attach/Detach Controller to treat the volume as not in use, preventing the 6+ minute delay.

By modifying Karpenter to apply this taint and wait until volumes are marked not in use on node object (~5 seconds), the following sequence will occur:

taint_solution

See this commit for a proof-of-concept implementation.
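
Purely for illustration (and not taken from the linked proof of concept), here is a minimal client-go sketch of what applying the taint and waiting might look like; the function name, the 15-second poll cap, and the exact node status fields checked are assumptions:

```go
// Assumed sketch: taint the node out-of-service once the instance is confirmed
// terminated, then wait briefly for the control plane to mark its volumes unused.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

var outOfServiceTaint = corev1.Taint{
	Key:    "node.kubernetes.io/out-of-service",
	Value:  "nodeshutdown",
	Effect: corev1.TaintEffectNoExecute,
}

// taintAndWaitForDetach must only be called AFTER the EC2 instance is terminated.
func taintAndWaitForDetach(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Taints = append(node.Spec.Taints, outOfServiceTaint)
	if _, err := c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// ~5s was enough in testing for the AttachDetach Controller to react;
	// poll a little longer before letting the finalizer be removed.
	return wait.PollUntilContextTimeout(ctx, time.Second, 15*time.Second, true,
		func(ctx context.Context) (bool, error) {
			n, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			return len(n.Status.VolumesInUse) == 0 && len(n.Status.VolumesAttached) == 0, nil
		})
}
```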

Pros:

  • Solves 6+ minute delays by default
  • No additional latency before Karpenter's terminator can start instance termination.
  • Only minor latency in deleting the Node's finalizer, IF Karpenter does not treat a shutting-down instance as terminated. (In my tests, a 5-second wait was sufficient for the AttachDetach Controller to recognize the out-of-service taint and allow the volume detach.)
  • If Kubernetes makes the 6-minute ForceDetach timer infinite by default (as currently planned for Kubernetes v1.32), and the EBS CSI Node pod is unable to unmount all volumes before the improved Node Shutdown Manager times out, the out-of-service taint will be the only way to ensure the workload starts on another node.

Cons:

  • Only available in Kubernetes ≥ v1.26.
  • Requires Terminator to ensure instance is terminated before applying taint.
  • Problem B's delay still occurs because volumes will not be detached until consolidated instance terminates.

Alternatives Considered

Customer configuration

Customers can mitigate 6+ minute delays by configuring their nodes and pod taint tolerations. See the EBS CSI Driver FAQ: Mitigating 6+ minute delays for an up-to-date list of configuration requirements.

A quick overview:

  • Configure Kubelet for Graceful Node Shutdown
  • Enable Karpenter Spot Instance interruption handling
  • Use EBS CSI Driver ≥ v1.28 in order to use the PreStop Lifecycle Hook
  • Use Karpenter ≥ v1.0.0

Pros:

  • No code change required in Karpenter.

Cons:

  • Requiring configuration is a poor customer experience because many customers will not be aware of the special requirements for EBS-backed workloads with Karpenter. Troubleshooting this configuration is difficult due to the two separate attachment delay issues (hence issues are still being raised on the Karpenter and EBS CSI Driver projects).
  • Only fixes problem A, the 6+ minute delay.
  • Stateful workloads that tolerate Karpenter's disrupting taint, or any system-critical stateful DaemonSets with a higher terminationGracePeriod than the EBS CSI Driver, will still see migration delays.

Problem B. Preventing Delayed Detachments

Even if we solve the 6+ minute volume-in-use delay, AWS customers may suffer from a second type of delay due to behavior specific to EC2.

When does this happen?

If Karpenter calls EC2 TerminateInstance before the EC2 DetachVolume calls finish, then the volumes won't be detached until the old instance terminates. This delay depends on the instance type: roughly 1 minute for m5a.large, 2 minutes for large GPU instances like g4ad.16xlarge, and 10+ minutes for certain metal instances like m7i.metal-48xl. For more context see Appendix D1 and the instance termination latency measurements.

Operators will see FailedAttachVolume events on the pod object with a Multi-Attach error followed by AttachVolume.Attach failed errors.

Solution B1: Wait for detach in Karpenter cloudProvider.Delete

Wait for volumes to detach before terminating the instance.

We can do this by waiting for all volumes of the drainable pods to be marked as neither in use nor attached before terminating the node in c.cloudProvider.Delete (up to a maximum of 20 seconds). See Appendix D3 for the implementation details of this wait.

We can detect that a volume is detached by ensuring that the volumeattachment objects associated with the relevant PVs are deleted. This also implies that the volume was safely unmounted by the CSI Node pod.

This means that our sequence of events will match the ideal diagram from the section [Ideal Disruption Flow for Stateful Workloads](#ideal-disruption-flow-for-stateful-workloads).

We can use logic similar to today's proof-of-concept implementation, but move it to karpenter-provider-aws and check node.Status.VolumesInUse instead of listing volumeattachment objects. A 20-second max wait was sufficient to prevent delays with the m5a instance type, but further testing is needed to ensure it is enough for Windows/GPU instance types. A sketch of the volumeattachment-based variant follows.
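
For illustration (this is not the linked proof of concept), a client-go sketch of waiting on EBS-managed volumeattachment deletion before instance termination; the function name, the polling interval, and treating a timeout as "proceed anyway" are assumptions:

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForEBSDetach blocks (up to maxWait, e.g. 20s) until no ebs.csi.aws.com
// VolumeAttachment objects reference nodeName. Returning nil means every EBS
// volume was unmounted and detached before we terminate the instance.
func waitForEBSDetach(ctx context.Context, c kubernetes.Interface, nodeName string, maxWait time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, maxWait, true,
		func(ctx context.Context) (bool, error) {
			vas, err := c.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, va := range vas.Items {
				if va.Spec.NodeName == nodeName && va.Spec.Attacher == "ebs.csi.aws.com" {
					return false, nil // at least one EBS volume still attached (or detaching)
				}
			}
			return true, nil
		})
}
```

If the timeout is hit, termination would proceed as it does today, so the worst-case added latency stays bounded at the maximum wait.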

Pros:

  • Leaves decision to each cloud/storage provider
  • Can opt-in to this behavior for specific CSI Drivers (Perhaps via Helm parameter)
  • Only delays termination of nodes with stateful workloads.
  • Implicitly solves problem A for EBS-backed stateful workloads, if volumeattachment object is deleted before instance is terminated.

Cons:

  • Delays node termination and finalizer deletion by a worst-case of 20 seconds. (We can skip waiting on the volumes of non-drainable pods to make the average case lower)
  • Other CSI Drivers must opt-in

Alternatives Considered

Implement B1 in kubernetes-sigs/karpenter

Instead of solving this inside the c.cloudProvider.Delete, solve this inside termination controller's reconciler loop (as is done in today's proof-of-concept implementation)

Pros:

  • Karpenter-provider-aws does not need to know about Kubernetes volume lifecycle
  • Implicitly solves problem A for EBS-backed stateful workloads, if volumeattachment object is deleted before instance is terminated.

Cons:

  • [Open Question] EBS may be the only storage provider where detaching volumes before terminating the instance matters. If this is the case, the delay before instance termination is not worth it for customers of other storage/cloud providers.
  • Should not hardcode cloud-provider specific CSI Drivers in upstream project. Therefore this delay must be agreed upon by multiple drivers and cloud providers.

Karpenter Polls EC2 for volume detachments before calling EC2 TerminateInstance

Karpenter-provider-aws can poll EC2 DescribeVolumes before making an EC2 TerminateInstance call.

Pros:

  • Solves problem B
  • Implicitly solves problem A for EBS-backed stateful workloads, if volumeattachment object is deleted before instance is terminated.

Cons:

  • If volumes are not unmounted due to problem A, volumes cannot be detached anyway. Termination is the fastest way forward.
  • Karpenter has to worry about EBS volume lifecycle.

Appendix

Z. Document TODOs

  • Expand Appendix terminology and further reading sections
  • Prove what potential data corruption issues we are talking about.
  • List open questions + decision log after design review.

T. Latency Numbers

Pod termination -> volumes cleaned up

These timings come from a few manual tests. Treat them as ball-park numbers to guide our conversation, not facts.

Pod terminated -> volumes unmounted: typically <1 second. Can be longer if the volume is very large (terabytes).

unmount -> EC2 DetachVolume called: typically <1 second.

unmount -> volume actually detached from Linux instance: typically 5-10 seconds. Can take longer because EBS provides no detach SLA.

pod termination -> Karpenter can safely call EC2 TerminateInstances: ~10 seconds. (If the EC2 DetachVolume call is made far enough ahead of EC2 TerminateInstances, we are fine for many instance types. It's only when they happen within a few seconds of each other that we run into the Problem B race.)

Instance stopped/terminated times

These are manual tests measured by polling EC2 DescribeInstances performed in June 2024 in us-west-2. Treat them as ball-park numbers to guide our conversation, not facts.

Times are in minutes:seconds.

| Instance type | Stopped | Terminated |
| --- | --- | --- |
| m5.large | ~40s | ~55s |
| c5.12xlarge (Windows 2022 AMI) | ~30s | ~1:15 |
| c5.metal | ~10:55 | ~10:54 |
| g4ad.xlarge (Linux GPU AMI) | ~53s | ~57s |
| g4ad.4xlarge (Linux GPU AMI) | ~53s | ~1:37 |
| g4ad.16xlarge (Linux GPU AMI) | ~2:00 | ~2:10 |

Windows instances with elastic GPUs are reported to have slow termination times, but this has yet to be tested.

A. Further Reading

B. Terminology

  • Daemonset: Pod scheduled on every Node.
  • EBS: Elastic Block Store.
  • Kubelet: Primary 'node agent' that runs on each node. The Volume Manager in the Kubelet makes gRPC calls to the EBS CSI Node pod.
  • StatefulSet: Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. These uniqueness guarantees are valuable when your workload needs persistent storage.

C. Related Issues

D. Additional Context

D1. EC2 Termination + EC2 DetachVolume relationship additional context

If the EC2 API reacts to an EC2 TerminateInstances call before the EC2 DetachVolume calls, the following may occur:

  1. Karpenter invokes TerminateInstances
  2. EC2 notifies the guest OS that it needs to shut down.
  3. The guest OS can take a long time to complete shutting down.
  4. In the meantime, the CSI driver is informed that the volumes are no longer in use and attempts to detach them.
  5. The detach workflow is blocked because the OS is shutting down.
  6. Once the guest OS finally finishes shutting down, AWS EC2 cleans up the instance.
  7. The detach workflows are then unblocked and become no-ops because the instance is already terminated.
  8. The EBS CSI Controller is able to attach the volume to the new instance.

D2. Non-Graceful Shutdown + out-of-service taint additional context

When was out-of-service taint added?

Added as part of the Non-graceful node shutdown handling feature; enabled by default in Kubernetes v1.26, stable in v1.28.

From the Kubernetes documentation:

When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods that are part of a StatefulSet will be stuck in terminating status on the shutdown node and cannot move to a new running node. This is because kubelet on the shutdown node is not available to delete the pods so the StatefulSet cannot create a new pod with the same name. If there are volumes used by the pods, the VolumeAttachments will not be deleted from the original shutdown node so the volumes used by these pods cannot be attached to a new running node. As a result, the application running on the StatefulSet cannot function properly. If the original shutdown node comes up, the pods will be deleted by kubelet and new pods will be created on a different running node. If the original shutdown node does not come up, these pods will be stuck in terminating status on the shutdown node forever.

To mitigate the above situation, a user can manually add the taint node.kubernetes.io/out-of-service with either NoExecute or NoSchedule effect to a Node, marking it out-of-service. If the NodeOutOfServiceVolumeDetach feature gate is enabled on kube-controller-manager, and a Node is marked out-of-service with this taint, the pods on the node will be forcefully deleted if there are no matching tolerations on it and volume detach operations for the pods terminating on the node will happen immediately. This allows the Pods on the out-of-service node to recover quickly on a different node.

During a non-graceful shutdown, Pods are terminated in the two phases:

1. Force delete the Pods that do not have matching out-of-service tolerations.
2. Immediately perform detach volume operation for such pods.

Note:
- Before adding the taint node.kubernetes.io/out-of-service, it should be verified that the node is already in shutdown or power off state (not in the middle of restarting).
- The user is required to manually remove the out-of-service taint after the pods are moved to a new node and the user has checked that the shutdown node has been recovered since the user was the one who originally added the taint.
Where is out-of-service taint used in k/k?

Searching Kubernetes/Kubernetes, I found the out-of-service taint referenced in the following places:

  • The AttachDetach Controller will trigger a volume detach even if Kubernetes' state says the volume is still mounted by the node. Seen here. (As of Kubernetes 1.30 this detach is triggered after a 6-minute forceDetach timer. There are plans to turn this timer off by default in Kubernetes 1.32, which means volumes will never be force-detached without a successful CSI NodeUnstage / NodeUnpublish.)

  • The Pod Garbage Collection Controller garbage collects pods that are terminating on a not-ready node with the out-of-service taint (it adds those pods to the terminatingPods list) here

  • Various metrics like PodGCReasonTerminated

  • GCE has upstream e2e tests on this feature here

Is the out-of-service taint safe to use?

The out-of-service taint is confirmed safe to use with EBS-backed stateful workloads. This is because even if the AttachDetach Controller issues a force detach, the EBS CSI Controller's EC2 DetachVolume call cannot detach a mounted volume while the instance is still running, and by the time the instance is terminated the volume is already detached without an EC2 DetachVolume call.

Open Question: However, this may not be true for all CSI Drivers. There may be certain CSI Drivers that expect NodeUnstage and NodeUnpublish to be called before ControllerUnpublish, because they perform additional logic outside of typical I/O flushes and unmount syscalls.

We can perhaps consider a version of this solution that lives in karpenter-provider-aws AND only applies the taint if all volumeattachment objects left on the node are associated with the EBS CSI Driver, as in the sketch below.
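
A minimal sketch of that check, assuming the EBS CSI Driver's attacher name ebs.csi.aws.com (the function name is illustrative):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// onlyEBSAttachmentsRemain reports whether every VolumeAttachment still
// referencing nodeName was created by the EBS CSI driver. If another CSI
// driver's attachment remains, skip the out-of-service taint to avoid
// forcing a ControllerUnpublish that driver may not tolerate.
func onlyEBSAttachmentsRemain(ctx context.Context, c kubernetes.Interface, nodeName string) (bool, error) {
	vas, err := c.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, va := range vas.Items {
		if va.Spec.NodeName == nodeName && va.Spec.Attacher != "ebs.csi.aws.com" {
			return false, nil
		}
	}
	return true, nil
}
```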

What changes in Kubernetes due to this Node Ungraceful Shutdown feature?

From KEP 2268, proposed pre-KEP and post-KEP logic:

Existing logic:

1. When a node is not reachable from the control plane, the health check in Node lifecycle controller, part of kube-controller-manager, sets Node v1.NodeReady Condition to False or Unknown (unreachable) if lease is not renewed for a specific grace period. Node Status becomes NotReady.

2. After 300 seconds (default), the Taint Manager tries to delete Pods on the Node after detecting that the Node is NotReady. The Pods will be stuck in terminating status.

Proposed logic change:

1. [Proposed change] This proposal requires a user to apply a out-of-service taint on a node when the user has confirmed that this node is shutdown or in a non-recoverable state due to the hardware failure or broken OS. Note that user should only add this taint if the node is not coming back at least for some time. If the node is in the middle of restarting, this taint should not be used.

2. [Proposed change] In the Pod GC Controller, part of the kube-controller-manager, add a new function called gcTerminating. This function would need to go through all the Pods in terminating state, verify that the node the pod scheduled on is NotReady. If so, do the following:

3. Upon seeing the out-of-service taint, the Pod GC Controller will forcefully delete the pods on the node if there are no matching tolerations on the pods. This new out-of-service taint has NoExecute effect, meaning the pod will be evicted and a new pod will not schedule on the shutdown node unless it has a matching toleration. For example, node.kubernetes.io/out-of-service=nodeshutdown:NoExecute or node.kubernetes.io/out-of-service=hardwarefailure:NoExecute. We suggest using the NoExecute effect in the taint to make sure pods will be evicted (deleted) and fail over to other nodes.

4. We'll follow taint and toleration policy. If a pod is set to tolerate all taints and effects, that means user does NOT want to evict pods when node is not ready. So GC controller will filter out those pods and only forcefully delete pods that do not have a matching toleration. If your pod tolerates the out-of-service taint, then it will not be terminated by the taint logic, therefore none of this applies.

5. [Proposed change] Once pods are selected and forcefully deleted, the attachdetach reconciler should check the out-of-service taint on the node. If the taint is present, the attachdetach reconciler will not wait for 6 minutes to do force detach. Instead it will force detach right away and allow volumeAttachment to be deleted.

6. This would trigger the deletion of the volumeAttachment objects. For CSI drivers, this would allow ControllerUnpublishVolume to happen without NodeUnpublishVolume and/or NodeUnstageVolume being called first. Note that there is no additional code changes required for this step. This happens automatically after the Proposed change in the previous step to force detach right away.

7. When the external-attacher detects the volumeAttachment object is being deleted, it calls CSI driver's ControllerUnpublishVolume.

D3: WaitForVolumeDetachments Implementation Details

A proof-of-concept implementation can be seen in PR 1294

WaitForVolumeDetachments can cause reconciler requeues until either all detachable EBS-managed volumeattachment objects are deleted, or a max timeout has been reached. As jmdeal@ suggested, we can either:

  • Add an annotation to the node to indicate when the drain attempt began, and continue to reconcile until an upper time limit has been hit.
  • Do the same thing but with an in-memory map from node to timestamp.

An in-memory map would mean no new annotations on the node, but the wait would not persist across Karpenter controller restarts. A sketch of the in-memory approach follows.
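
A small sketch of the in-memory variant (the type and method names are illustrative):

```go
package sketch

import (
	"sync"
	"time"
)

// detachWaits tracks when we first started waiting on a node's volume
// detachments, so reconciles can requeue until an upper bound is hit.
// Being in-memory, the timestamps are lost if the controller restarts,
// in which case the wait simply starts over (still bounded by maxWait).
type detachWaits struct {
	mu     sync.Mutex
	starts map[string]time.Time
}

func newDetachWaits() *detachWaits {
	return &detachWaits{starts: map[string]time.Time{}}
}

// expired records the first time we saw nodeName and reports whether the
// maximum wait has elapsed since then.
func (d *detachWaits) expired(nodeName string, maxWait time.Duration) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	start, ok := d.starts[nodeName]
	if !ok {
		d.starts[nodeName] = time.Now()
		return false
	}
	return time.Since(start) > maxWait
}

// forget clears bookkeeping once the node is gone.
func (d *detachWaits) forget(nodeName string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	delete(d.starts, nodeName)
}
```

The first reconcile of a node records the start time; later reconciles requeue until expired reports true, after which termination proceeds regardless.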

E. Reproduction Manifests

Deploy Karpenter v0.37.0 and EBS CSI Driver 1.31.0 to your cluster

Apply the following manifest to have a stateful pod migrate from an expiring node every 3 minutes.
```yaml
---
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: general-purpose
  annotations:
    kubernetes.io/description: "General purpose NodePool for generic workloads"
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 3m
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
  annotations:
    kubernetes.io/description: "General purpose EC2NodeClass for running Amazon Linux 2 nodes"
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: "KarpenterNodeRole-karpenter-demo" # replace with your cluster name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "karpenter-demo" # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "karpenter-demo" # replace with your cluster name
  userData: |
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="BOUNDARY"

    --BOUNDARY
    Content-Type: text/x-shellscript; charset="us-ascii"

    #!/bin/bash
    echo -e "InhibitDelayMaxSec=45\n" >> /etc/systemd/logind.conf
    systemctl restart systemd-logind
    echo "$(jq ".shutdownGracePeriod=\"45s\"" /etc/kubernetes/kubelet/kubelet-config.json)" > /etc/kubernetes/kubelet/kubelet-config.json
    echo "$(jq ".shutdownGracePeriodCriticalPods=\"15s\"" /etc/kubernetes/kubelet/kubelet-config.json)" > /etc/kubernetes/kubelet/kubelet-config.json
    --BOUNDARY--
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
      nodeSelector:
        karpenter.sh/nodepool: general-purpose
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: nginx
  volumeClaimTemplates:
  - metadata:
      name: www
      labels:
        roar: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
```

F: Sequence Diagrams

Raw code for sequence diagrams

Simplified Stateless Termination

```mermaid
sequenceDiagram
    participant Karp as Karpenter Terminator
    participant Old as Consolidating Node (VM)
    
    Karp->>+Old: Drain
    Old->>-Karp: Pods Terminated.
    Karp->>+Old: EC2 TerminateInstance
    Old->>Old: Shutting Down
    Old->>-Karp: Terminated
    Karp-->>Old: Remove Finalizer.
```

Complicated Stateless Disruption

```mermaid
sequenceDiagram
    participant Old as Old Node (VM)
    participant Kub as Kubernetes CP
    participant Karp as Karpenter Terminator
    participant EC2 as EC2 API
    
    Kub->>+Karp: Old Node marked for deletion
    Karp->>Karp: Old terminator start
    Karp->>Kub: Taint Old `Disrupting:NoSchedule`
    Karp->>+Kub: Drain Old Node
    Kub->>+Old: Terminate Pods
    Old->>-Kub: Pods Terminated
    Kub->>-Karp: Drain Done
    Karp->>+EC2: Terminate Old Node
    EC2->>+Old: Shut Down
    EC2->>-Karp: Old ShuttingDown
    Karp->>-Kub: Remove Old Node Finalizer
    Kub-->Old: Lost Communication
    Old->-EC2: Terminated
    destroy Old
```

Consolidation Event: Stateful Today (Default EBS CSI Driver configuration)

```mermaid
sequenceDiagram
    participant EN as CSI Node Pod
    participant Old as Old Node (VM)
    participant Kub as Kubernetes CP
    participant Karp as Karpenter Terminator
    participant EC2 as EC2 API
    
    Kub->>+Karp: Old Node marked for deletion
    Karp->>Karp: Old terminator start
    Karp->>+Kub: Drain Old Node
    Kub->>+Old: Terminate Stateful Pod
    Old->>Kub: Stateful Pod Terminated
    Kub->>-Karp: Drain Done
    Old->>+EN: Unmount Volume
    EN->>-Old: Unmounted Volume
    Old->>-Kub: Safe to detach Volume
    Karp->>+EC2: Terminate Old Node
    EC2->>+Old: Shut Down
    EC2->>-Karp: Old ShuttingDown
    Karp->>Kub: Remove Old Node Finalizer
    Kub-->Old: Lost Communication
    Old->-EC2: Terminated
```

Stateful Workload Termination–Ideal Case:

```mermaid
sequenceDiagram
    participant Karp as Karpenter Terminator
    participant Old as Consolidating Node
    participant CN as CSI Node Service
    participant AT as AttachDetach Controller
    participant NN as New Node
    
    note left of Karp: 0. Deletion Marked
    Karp->>+Old: Drain
    Old->>Old: Intolerant Stateful Pod Terminated
    note left of Karp: 1. Pods Terminated
    Old-->NN: Pod Rescheduled
    Old->>-Karp: Drain Complete
    NN->>+NN: Stateful Pod ContainerCreating
    NN->>+AT: Where is my Volume?
    AT->>AT: Volume Still In Use
    note left of Karp: 2. Volumes unmount
    Old->>+CN: Unmount Volume
    CN->>-Old: Unmounted Volume
    Old->>+AT: Volume Not In Use
    note left of Karp: 3. Volumes detached
    AT->>-Old: EC2 Detach Volume
    Old->>AT: EC2 Detached Volume
    Note right of Karp: Waited for volume detach
    note left of Karp: 4. Terminate Instance
    Karp->>+Old: EC2 TerminateInstance
    Note right of NN: ~15s delay (EC2 detach + attach)
    AT->>NN: EC2 Attach Volume
    NN->>AT: EC2 Attached Volume
    AT->>-NN: Your Volume is Ready
    NN->>-NN: Pod Running
    Old->>Old: Shutting Down
    destroy CN
    Old->>CN: Kubelet Kills
    Old->>-Karp: EC2 Terminated
    note left of Karp: 5. Remove Finalizer
    Karp-->>Old: Remove Finalizer
```

Delay until instance terminated (1 min delay)

```mermaid
sequenceDiagram
    participant Karp as Karpenter Terminator
    participant Old as Consolidating Node
    participant AT as AttachDetach Controller
    participant NN as New Node
    
    Note left of Karp: 0. Deletion Marked
    Karp->>+Old: Drain
    Old->>Old: Intolerant Stateful Pod Terminated
    Note left of Karp: 1. Pods Terminated
    Old-->NN: Pod Rescheduled
    NN->+NN: Stateful Pod ContainerCreating
    NN->>+AT: Where is my Volume?
    AT->>AT: Volume Still In Use
    Old->>Karp: Drain Complete
    Old->>Old: CSI Node Pod Unmounted Volume
    Note left of Karp: 2. Unmount Volume
    Note right of Karp: Karpenter waits for unmount
    Note left of Karp: 4. Terminate Instance
    par
        Karp->>+Old: EC2 TerminateInstance
    and 
        Old->>AT: Volume Not In Use
        AT->>Old: EC2 Detach Volume
    end
    Old->>Old: Shutting Down
    Note right of Old: Volume detach delayed until terminated.    
    Old->>AT: Termination Detached Volume
    Note right of NN: ~1m delay (EC2 Termination)*
    Old->>-Karp: EC2 Terminated
    AT->>NN: EC2 Attach Volume
    NN->>AT: EC2 Attached Volume
    AT->>-NN: Your Volume is Ready
    NN->>-NN: Pod Running
    Note left of Karp: 5. Remove Finalizer
    Karp-->>Old: Remove Finalizer
```

force detach 6 min delay

```mermaid
sequenceDiagram
    participant Karp as Karpenter Terminator
    participant Old as Consolidating Node
    participant CN as CSI Node Service
    participant AT as AttachDetach Controller
    participant NN as New Node
    
    Note left of Karp: 0. Deletion Marked
    Karp->>+Old: Drain
    Old->>Old: All intolerant pods terminated
    Old->>-Karp: Drain Complete
    Note left of Karp: 4. Terminate Instance
    Karp->>+Old: EC2 TerminateInstance
    Old->>Old: Shutting Down
    Old->>Karp: EC2 Shutting Down
    par
    destroy CN 
    Old->>CN: Killed
    and
    Old->>Old: Kill Tolerant Pods
    end
    Old--xCN: Unknown if Volume Unmounted
    Old-->NN: Pod Rescheduled
    
    Note left of Karp: 5. Remove Finalizer
    destroy Old
    Karp-->>Old: Remove Finalizer
    NN->>+NN: Stateful Pod ContainerCreating
    NN->>+AT: Where is my Volume?
    AT--xOld: Unknown if Volume in use
    AT-->AT: Wait 6 Min ForceDetach Timer
    Old->Old: Shutdown Manager unmounts
    Old->Old: Instance Terminated
    AT->>AT: Volume Force Detached
    Note right of NN: 6+ min delay (K8s ForceDetach Timer)
    AT->>NN: EC2 Attach Volume
    NN->>AT: EC2 Attached Volume
    AT->>-NN: Your Volume is Ready
    NN->>-NN: Pod Running
```

Taint post shutdown 2 min delay

```mermaid
sequenceDiagram
    participant Karp as Karpenter Terminator
    participant Old as Consolidating Node
    participant CN as CSI Node Service
    participant AT as AttachDetach Controller
    participant NN as New Node
    
    Note left of Karp: 0. Mark Deletion
    Karp->>+Old: Drain
    par
        destroy CN 
        Old->>CN: Terminated
        Old--xCN: Unknown if Volume Unmounted
    and
        Old->>Old: All Intolerant Pods Terminated
    end
    Note left of Karp: 1. Pods Terminated
    Old->>-Karp: Drain Complete
    Old-->NN: Pod Rescheduled
    NN->>+NN: Stateful Pod ContainerCreating
    NN->>+AT: Where is my Volume?
    AT-->Old: Unknown if Volume not in use
    Note left of Karp: 4. Terminate Instance
    Karp->>+Old: EC2 TerminateInstance
    Old->>Old: Shutting Down / ShutdownManager Unmounts
    Note right of NN:  ~1m delay (EC2 Termination)*
    Old->>-Karp: EC2 Terminated
    Note left of Karp: Solution A2
    Karp->>Old: Taint out-of-service
    Old->>AT: Taint confirms Volume not in use
    Karp->>Karp: Wait until taint seen (~5s)
    destroy Old
    Note left of Karp: 5. Delete Finalizer
    Karp-->>Old: Remove Finalizer
    AT->>NN: EC2 Attach Volume
    NN->>AT: EC2 Attached Volume
    AT->>-NN: Your Volume is Ready
    NN->>-NN: Pod Running
```

Alt Taint post shutdown delay

```mermaid
sequenceDiagram
    participant Karp as Karpenter Terminator
    participant Old as Consolidating Node
    participant CN as CSI Node Service
    participant AT as AttachDetach Controller
    participant NN as New Node
    
    Note left of Karp: 0. Mark Deletion
    Karp->>+Old: Drain
    Old->>Old: All Intolerant Pods Terminated

    Old->>-Karp: Drain Complete
    
    
    Note left of Karp: 4. Terminate Instance
    Karp->>+Old: EC2 TerminateInstance
    Old->>Old: Shutting Down 
    par
        destroy CN 
        Old->>CN: Terminated
        Old--xCN: Unknown if Volume Unmounted
    and
       Old->>Old: Tolerant Pods Killed
    end
    Old-->NN: Pod Rescheduled
    NN->>+NN: Stateful Pod ContainerCreating
    NN->>+AT: Where is my Volume?
    AT-->Old: Unknown if Volume not in use
    Old->>Old: Volume Unmounted/Detached by ShutDown
    Old->>-Karp: EC2 Terminated
    Note left of Karp: Solution A2
    Karp->>Old: Taint out-of-service
    Old->>AT: Taint confirms Volume not in use
    Karp->>Karp: Wait until taint seen (~5s)
    destroy Old
    
    Note right of NN:  ~1m delay (EC2 Termination)*
    Note left of Karp: 5. Delete Finalizer
    par 
    
    Karp-->>Old: Remove Finalizer
    and
    
    AT->>NN: EC2 Attach Volume
    NN->>AT: EC2 Attached Volume
    AT->>-NN: Your Volume is Ready
    NN->>-NN: Pod Running
    end
```

G: Footnotes

[^1]: From my testing via the EBS CSI Driver, EBS volumes typically take 10 seconds to detach and 5 seconds to attach. But there is no attach/detach SLA.

[^2]: Complicated stateless diagram.

[^3]: The CSI controller pod actually consists of multiple containers: Kubernetes-maintained 'sidecar' controllers and the actual CSI plugin that cloud storage providers maintain. Relevant to this document are the external-attacher and ebs-plugin containers. The external-attacher watches Kubernetes volumeattachment objects and makes remote procedure calls to the ebs-plugin, which interacts with the EC2 backend to make sure volumes get attached.

[^4]: The EBS CSI Node service is called by the node Kubelet's Volume Manager twice after a volume attachment: once to format the block device and mount the filesystem on a node global directory, and a second time to mount it at the pod's directory.

[^5]: Note, this is a hard-coded "force-detach" delay in the KCM AttachDetach Controller, which can be disabled. If disabled, this delay is infinite and Kubernetes will never call ControllerUnpublishVolume, requiring the customer to manually delete the volumeattachment object. As of June 2024, SIG Storage wants to disable this timer by default in Kubernetes v1.32.

[^6]: For certain storage providers, this delay in pod restart can prevent potential data corruption due to unclean mounts. (Is this true for EBS? I'm skeptical that these data corruption issues exist for non-multi-attach EBS volumes. EC2 does not allow mounted volumes to be detached, and most Linux distributions unmount all filesystems/volumes during shutdown. Finally, when the volume is attached to the new node, I believe CSI Node pods run e2fsck before formatting volumes and never forcefully reformat volumes.)