We recently updated Calico to 3.29.1 on one of our staging clusters and found that, after a few hours, there was a clear upward trend in the number of file descriptors held by calico-node pods.
Checking on a running instance after a couple of days, we found that the calico-node -felix process had nearly 6000 file descriptors according to lsof, almost all of which looked like the following:
Deleting that pod dropped the fds, although the replacement pod is starting the same trend all over again.
Let me know if there is any other debugging data I can provide.
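For reference, this is roughly the kind of check we used to get the counts above; it is a sketch to run on an affected node (the pgrep pattern, and pgrep/lsof being installed there, are assumptions), not a copy of our exact commands:

```
# Find the felix PID on the node (the pattern just has to match the
# "calico-node -felix" command line).
pid=$(pgrep -f 'calico-node -felix' | head -n1)

# Quick count straight from /proc; matches the total lsof reports.
sudo ls /proc/"$pid"/fd | wc -l

# Break the open descriptors down by type (column 5 of lsof output is TYPE).
sudo lsof -p "$pid" | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn
```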
Expected Behavior
A relatively steady number of open file descriptors for a calico-node pod.
Current Behavior
A steady increase in open file descriptors.
Possible Solution
Steps to Reproduce (for bugs)
Just deploy Calico 3.29.1, as far as I can tell; the sampling sketch below is enough to see the trend.
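A minimal sampling loop along these lines (run on an affected node; the interval and the pgrep lookup are placeholders, not part of the original report) should show the count climbing over a few hours:

```
# Sample the felix fd count every 10 minutes (placeholder interval).
while true; do
  pid=$(pgrep -f 'calico-node -felix' | head -n1)
  printf '%s %s\n' "$(date -Is)" "$(sudo ls /proc/"$pid"/fd | wc -l)"
  sleep 600
done
```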
Context
This is OK for now in our staging environment, but we are worried about going to production this way. It is entirely possible this is due to some weird config on our side, but nothing is jumping out at me so far.
Your Environment
Calico version: 3.29.1
Calico dataplane (iptables, windows etc.): iptables
Orchestrator version (e.g. kubernetes, mesos, rkt): kubernetes
Operating System and version: Linux ip-10-213-23-129 6.8.0-1018-aws #19~22.04.1-Ubuntu SMP Wed Oct 9 16:48:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Link to your project (optional):