
Increase inotify limits #756

Closed
dippynark opened this issue Feb 22, 2019 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@dippynark
Contributor

dippynark commented Feb 22, 2019

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: cAdvisor fails to watch container directories because the node has hit its inotify watch limit:

... kubelet.go:1344] Failed to start cAdvisor inotify_add_watch /sys/fs/cgroup/memory/kubepods/pod<pod_id>/<container_id>: no space left on device

What you expected to happen: /proc/sys/fs/inotify/max_user_watches should be higher to reduce the impact of a leak in the inotify library used by cAdvisor. The problem is described further in the issues here and here. However, this is still an upstream issue.
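
For context, a minimal sketch of what raising the limit looks like on a node; the value 524288 and the drop-in file name are illustrative assumptions, not necessarily what tarmak should ship:

```
# Inspect the current limits (the kernel default for max_user_watches is often 8192).
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# Raise the watch limit for the running kernel (takes effect immediately).
sudo sysctl fs.inotify.max_user_watches=524288

# Persist it across reboots with a sysctl.d drop-in.
echo 'fs.inotify.max_user_watches = 524288' | sudo tee /etc/sysctl.d/90-inotify.conf
sudo sysctl --system
```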

How to reproduce it (as minimally and precisely as possible): Spin up a tarmak cluster and create and delete many pods on a particular node until the limit is reached.
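
A rough sketch of that churn, assuming kubectl access; the pod names, image, iteration count, and sleep durations are placeholders:

```
# Create and delete pods in a loop until cAdvisor starts logging
# "no space left on device" for inotify_add_watch on the target node.
# On a multi-node cluster, pin the pods to one node (e.g. with a nodeSelector).
for i in $(seq 1 500); do
  kubectl run "churn-$i" --image=busybox --restart=Never -- sleep 5
  sleep 10   # give the container time to start so cAdvisor adds its watches
  kubectl delete pod "churn-$i" --wait=false
done
```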

Environment:

  • Kubernetes version (use kubectl version): 1.12.3
@jetstack-bot added the kind/bug label Feb 22, 2019
@simonswine
Contributor

I think that was covered by #757
