Does Node 20.x fully support cgroups v2? Facing memory ceiling problem in Kubernetes cluster #52478
Comments
In theory, yes.
Hi @mcollina,
Node v20.x includes the libuv version with the necessary fix, as far as I understand (#47259). However, I have not tested it.
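For anyone wanting to double-check a specific build: whichever libuv copy a Node binary ends up with, it reports the version it is actually using, so a quick sanity check (not a substitute for reading the #47259 discussion) is:

```sh
# Print the libuv version this Node.js build actually uses.
node -p process.versions.uv

# Full list of bundled/linked dependency versions (V8, libuv, OpenSSL, ...).
node -p process.versions
```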
We have the same issue and are switching back to cgroup v1.
cc @santigimeno
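For anyone weighing the same workaround: switching a host back to the legacy hierarchy is a kernel command-line change rather than anything Node-specific. A minimal sketch, assuming a systemd-based, RPM-family host managed with grubby (other distros and managed Kubernetes node pools have their own mechanisms):

```sh
# Assumption: RPM-family host using grubby; adapt for your distro / node image.
# Boot the host with the legacy (v1) cgroup hierarchy instead of the unified one.
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```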
After upgrading to cgroup v2 on our OKD cluster, we don't have any issues running Node 20.13.1 apps. Only Node 18 caused issues, which were mitigated using max_old_space_size.
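For readers applying the same mitigation on Node 18: the usual form is to cap V8's old space explicitly so it stays under the pod's memory limit. A minimal sketch (the 400 MiB value and the server.js entry point are illustrative only):

```sh
# Cap V8's old space explicitly instead of relying on cgroup detection.
# Keep the value comfortably below the container/pod memory limit to leave
# room for buffers, stacks and other native allocations.
NODE_OPTIONS="--max-old-space-size=400" node server.js

# Equivalent direct flag form:
node --max-old-space-size=400 server.js
```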
Hi. This is on a node using cgroup v1
while this is on a cgroup v2 node
Is something else going on here?
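For anyone comparing nodes the same way, a quick check of which cgroup hierarchy a node or container is on, and where its memory limit is exposed, using the standard kernel interfaces:

```sh
# cgroup2fs => unified (v2) hierarchy; tmpfs => legacy (v1) hierarchy.
stat -fc %T /sys/fs/cgroup/

# Memory limit as seen from inside a container:
cat /sys/fs/cgroup/memory.max                      # cgroup v2 ("max" means unlimited)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes    # cgroup v1
```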
@mcollina can you reopen this issue?
@ben-bertrands-hs By any chance, are you running on Alpine Linux? AFAIK the Node.js build there links against the Alpine-supplied libuv, which, until recently, was a fairly old one.
@rescomms-tech yes, we are running Alpine Linux (node:20.17.0-alpine). Doing a find on the container only returns these:
Checking
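To check whether the image's node binary is dynamically linked against a system libuv or carries a bundled copy, something along these lines should answer it (assuming the official node:alpine layout; adjust paths if yours differs):

```sh
# If a shared libuv appears here, Node uses the system library;
# if nothing matches, libuv is statically bundled into the binary.
ldd "$(command -v node)" | grep -i libuv || echo "no shared libuv linked"

# Check whether an Alpine libuv package is installed at all.
apk info -e libuv || echo "no alpine libuv package installed"
```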
Version
20.x
Platform
No response
Subsystem
No response
What steps will reproduce the bug?
Is #47259 fixed in Node.js 20.x?
How often does it reproduce? Is there a required condition?
No response
What is the expected behavior? Why is that the expected behavior?
Node.js being aware of the memory and CPU available via cgroups v2.
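One way to verify that a given Node.js build actually derives its default heap ceiling from the cgroup limit rather than from the host's total memory (docker with --memory is used here only as a stand-in for a pod memory limit; the 512 MiB figure is illustrative):

```sh
# Run Node under a 512 MiB container memory limit and print the default
# V8 heap ceiling. If the build is cgroup-aware, this value should scale
# with the container limit instead of the host's total memory.
docker run --rm --memory=512m node:20-alpine \
  node -e 'console.log(require("v8").getHeapStatistics().heap_size_limit / 2 ** 20, "MiB")'
```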
What do you see instead?
Pods running Node.js on my cluster are running out of memory.
Additional information
No response