Controller pod volumemodifier container sometimes pegs CPU #2030

Open
danports opened this issue May 4, 2024 · 4 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

danports commented May 4, 2024

/kind bug

What happened?
Occasionally the volumemodifier container in one of the EBS CSI controller pods starts consuming as much CPU as it can get (stuck in some kind of busy wait, perhaps?). Here's a chart showing CPU usage for the volumemodifier containers in a cluster over 3 days; the nodes have only 2 vCPUs, so the container is essentially using all of the CPU time available:
[Chart: CPU usage of the volumemodifier containers over 3 days, with one container pinned near the node's full 2 vCPUs]
You can see from the chart that the lease likely switched from one container to another a couple of times but the container continued pegging the CPU regardless of which pod it was running in.

What you expected to happen?
The container should use a minuscule amount of CPU time, as it usually does.

How to reproduce it (as minimally and precisely as possible)?
Good question! I've only noticed this problem fairly recently - I can try to track down when it started.

Anything else we need to know?:
I have two replicas of the controller running, but usually only one of the replicas has a volumemodifier container running out of control, probably the one with the active lease. The logs for the container look like this:

2024-05-04 17:44:13.668	I0504 21:44:13.668099       1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2024-05-04 17:44:13.666	I0504 21:44:13.665958       1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2024-05-04 17:44:13.529	I0504 21:44:13.529067       1 reflector.go:351] Caches populated for *v1.PersistentVolume from pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2024-05-04 17:44:13.526	I0504 21:44:13.526610       1 reflector.go:325] Listing and watching *v1.PersistentVolume from pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2024-05-04 17:44:13.249	I0504 21:44:13.249540       1 reflector.go:351] Caches populated for *v1.Lease from pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2024-05-04 17:44:13.229	I0504 21:44:13.228949       1 reflector.go:325] Listing and watching *v1.Lease from pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2024-05-04 17:44:12.131	W0504 21:44:12.131235       1 reflector.go:462] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: watch of *v1.Lease ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
2024-05-04 17:44:12.131	W0504 21:44:12.130947       1 reflector.go:462] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
2024-05-04 17:44:12.131	W0504 21:44:12.130939       1 reflector.go:462] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
2024-05-04 17:42:35.438	I0504 21:42:35.438669       1 reflector.go:800] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Watch close - *v1.PersistentVolumeClaim total 0 items received
2024-05-04 17:38:53.666	I0504 21:38:53.666134       1 reflector.go:800] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Watch close - *v1.PersistentVolume total 2 items received
2024-05-04 17:37:14.433	I0504 21:37:14.433435       1 reflector.go:800] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Watch close - *v1.PersistentVolumeClaim total 5 items received
2024-05-04 17:35:48.458	I0504 21:35:48.458834       1 reflector.go:800] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Watch close - *v1.Lease total 1247 items received
2024-05-04 17:35:18.092	I0504 21:35:18.092294       1 reflector.go:377] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: forcing resync
2024-05-04 17:35:18.054	I0504 21:35:18.054127       1 reflector.go:377] pkg/mod/k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: forcing resync

It seems like killing the CPU-hogging pod is sufficient to resolve the problem until it recurs.
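
For anyone hitting the same thing, a minimal sketch of that workaround, assuming the driver runs in kube-system with the typical app=ebs-csi-controller label (adjust the namespace and selector for your install):

```sh
# Namespace and label are assumptions from a typical Helm/addon install; adjust as needed.
kubectl -n kube-system get pods -l app=ebs-csi-controller
# Delete the pod whose volumemodifier container is pegging the CPU; the Deployment
# recreates it and the lease fails over to the other replica.
kubectl -n kube-system delete pod <runaway-controller-pod>
```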

Environment

  • Kubernetes version (use kubectl version): 1.29.4
  • Driver version: volumemodifier container is running public.ecr.aws/ebs-csi-driver/volume-modifier-for-k8s:v0.2.1@sha256:78c116f223997fa8d074846bf10e1a08cc0b723dc03c7a20900685442b5a3504
k8s-ci-robot added the kind/bug label on May 4, 2024
AndrewSirenko (Contributor) commented

Hi @danports, thanks for bringing this to our attention; this is unfortunate.

In the short term, if you're blocked by this and are on a kOps cluster, consider switching to the upstream volume modification via the VolumeAttributesClass feature (which we are trying to get turned on by default in Kubernetes 1.31 so that EKS customers can start using it).
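
Roughly, that looks like the sketch below, assuming the VolumeAttributesClass feature gate and the storage.k8s.io/v1alpha1 API are enabled on your 1.29 cluster; the class name, PVC name, and gp3 parameters are just examples:

```sh
# Sketch only: VolumeAttributesClass is alpha in Kubernetes 1.29 and must be enabled
# via the VolumeAttributesClass feature gate; names and parameters below are examples.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: gp3-fast
driverName: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "4000"
  throughput: "250"
EOF

# Point an existing PVC (hypothetical name) at the class to request an in-place modification.
kubectl patch pvc my-data --type=merge \
  -p '{"spec":{"volumeAttributesClassName":"gp3-fast"}}'
```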

In the meantime I'll set up some clusters with constant volume modifications and try to spot any misbehaving sidecars so we can track this bug down.

Again, thank you! 🙏

danports (Author) commented May 6, 2024

Thanks @AndrewSirenko. I'll take a look at the feature you mentioned. This isn't a huge blocker, since the issue occurs only infrequently (about 1-2x/month, though I don't have enough archived telemetry to confirm the earlier occurrences are the same issue) and can be worked around by killing the pod. It looks like the problem started in late March, shortly after upgrading from Kubernetes 1.28 to 1.29, which according to the cluster's kOps logs included an upgrade from registry.k8s.io/sig-storage/csi-resizer:v1.4.0 to the current public.ecr.aws/ebs-csi-driver/volume-modifier-for-k8s:v0.2.1.

danports (Author) commented May 6, 2024

Also, is there anything I can do in the cluster (log verbosity, memory dumps, etc.) to help collect troubleshooting data if/when the issue recurs?

AndrewSirenko (Contributor) commented

Thanks for pointing to the regression occurring between 0.1.4 & 0.2.1; we made a few significant changes in between.

And if you could bump the sidecar's verbosity up to 7, that would be helpful. Thanks a million!
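
One way to do that, sketched under the assumption that the controller Deployment is named ebs-csi-controller in kube-system and the sidecar container is named volumemodifier (names may differ under kOps, and kOps can revert manual edits to managed addons):

```sh
# Confirm the container name first; the Deployment name and namespace are assumptions
# based on a typical install.
kubectl -n kube-system get deployment ebs-csi-controller \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
# Edit the Deployment and add "--v=7" (klog verbosity) to the volumemodifier container's args.
kubectl -n kube-system edit deployment ebs-csi-controller
```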
