This repository has been archived by the owner on Jan 9, 2023. It is now read-only.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: cAdvisor fails to watch container directories because the node hits its inotify watch limit:
... kubelet.go:1344] Failed to start cAdvisor inotify_add_watch /sys/fs/cgroup/memory/kubepods/pod<pod_id>/<container_id>: no space left on device
What you expected to happen: /proc/sys/fs/inotify/max_user_watches should be higher to reduce the impact of a watch leak in the inotify library used by cAdvisor. The problem is described further in the issues here and here; however, it is still an upstream issue.
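As a mitigation while the upstream leak is unresolved, the watch limit can be inspected and raised via sysctl. A minimal sketch; the value 524288 is an illustrative choice, not taken from this issue:

```shell
# Show the current per-user inotify watch limit
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running kernel (requires root); 524288 is illustrative
sysctl -w fs.inotify.max_user_watches=524288 || echo "need root to change the limit"

# Persist the setting across reboots via a sysctl drop-in (requires root)
echo "fs.inotify.max_user_watches=524288" | tee -a /etc/sysctl.d/99-inotify.conf >/dev/null \
  || echo "need root to persist the limit"
```

Raising the limit only delays the failure if the watches are genuinely leaking; it buys headroom, not a fix.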
How to reproduce it (as minimally and precisely as possible): Spin up a tarmak cluster, then create and delete many pods on a single node until the watch limit is reached.
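The churn step above can be sketched as a loop; this is a hypothetical helper (the `churn_pods` name, pod names, and image are mine, and it assumes kubectl is configured against the affected cluster):

```shell
# Repeatedly create and delete short-lived pods to churn container
# directories on the node, driving up cAdvisor's inotify watch count.
churn_pods() {
  count="$1"
  i=1
  while [ "$i" -le "$count" ]; do
    kubectl run "churn-$i" --image=busybox --restart=Never -- sleep 1
    kubectl delete pod "churn-$i" --wait=false
    i=$((i + 1))
  done
}
```

For example, `churn_pods 1000` on a cluster pinned to one node should eventually surface the `inotify_add_watch ... no space left on device` error in the kubelet log.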
Environment:
Kubernetes version (use kubectl version): 1.12.3