
RAM leak to the MeshCentral server #6179

Open
sheshko-as opened this issue Jun 15, 2024 · 56 comments
@sheshko-as

Describe the bug
During operation, RAM consumption climbs sharply until it runs out. I increased the amount of memory: it was 4 GB, then 8, now 16. Increasing the memory does not help. I checked on a dedicated server: when the memory runs out, the service keeps running but memory stays at the limit. On a VPS the service restarts when memory runs out. The problem may occur once a day, or once every three days; no pattern has been found.
[screenshots attached]

Information from journalctl:

Jun 14 18:56:41 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/meshcentral.service,task=node,pid=1>
Jun 14 18:56:41 kernel: Out of memory: Killed process 15892 (node) total-vm:28494740kB, anon-rss:15090984kB, file-rss:1304kB, shmem-rss:0kB, UID:0 pgtables:49396kB oom_sco>
Jun 14 18:56:41 systemd[1]: meshcentral.service: A process of this unit has been killed by the OOM killer.
Jun 14 18:56:42 systemd[1]: meshcentral.service: Failed with result 'oom-kill'.
Jun 14 18:56:42 systemd[1]: meshcentral.service: Consumed 4h 57min 33.871s CPU time.
Jun 14 18:56:52 systemd[1]: meshcentral.service: Scheduled restart job, restart counter is at 1.
Jun 14 18:56:52 systemd[1]: Stopped MeshCentral Server.
Jun 14 18:56:52 systemd[1]: meshcentral.service: Consumed 4h 57min 33.871s CPU time.
Jun 14 18:56:52 systemd[1]: Started MeshCentral Server.
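
For reference, a minimal way to log the MeshCentral process's memory once a minute, so the start of a climb can be lined up with user actions (a sketch; the unit name meshcentral.service and the log path are assumptions, adjust for your setup):

#!/bin/bash
# Log the RSS (in MB) of the MeshCentral node process once a minute.
# meshcentral.service and /var/log/meshcentral-rss.log are assumed names; adjust as needed.
while true; do
  PID=$(systemctl show -p MainPID --value meshcentral.service)
  if [ -n "$PID" ] && [ "$PID" != "0" ]; then
    RSS_KB=$(awk '/VmRSS/ {print $2}' "/proc/$PID/status")
    echo "$(date -Is) pid=$PID rss_mb=$((RSS_KB / 1024))" >> /var/log/meshcentral-rss.log
  fi
  sleep 60
done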

Server Software (please complete the following information):

  • OS: Ubuntu 22.04
  • Virtualization: VPS
  • RAM: 16 GB
  • vCPU: 4 cores, 2.2 GHz
  • SSD: 20 GB
  • Network: WAN
  • Version: 1.1.24
  • Node: 18.20.3
  • MongoDB:
    "version": "7.0.11",
    "gitVersion": "f451220f0df2b9dfe073f1521837f8ec5c208a8c",
    "openSSLVersion": "OpenSSL 3.0.2 15 Mar 2022",

Client Device (please complete the following information):

  • Device: PC
  • OS: Windows 10
  • Network: Remote over WAN
  • Browser: Google Chrome, Yandex, Opera GX
  • MeshCentralRouter Version: 1.8.8795

Remote Device (please complete the following information):

  • Device: PC
  • Quantity: 1500 devices
  • OS: Windows 10 Pro - 22H2
  • Network: Remote over WAN
  • Current Core Version (if known): Dec 9 2022, 3840084365

Your config.json file

{
  "settings": {
    "cert": "XXXXXX",
    "MongoDb": "mongodb://127.0.0.1:27017/meshcentral",
    "WANonly": true,
    "autoBackup": {
      "backupIntervalHours": 24,
      "keepLastDaysBackup": 30,
      "zipPassword": "XXXXXX",
      "webdav": {
        "url": "XXXXXX",
        "username": "XXXXXX",
        "password": "XXXXXX",
        "folderName": "XXXXXX",
        "maxFiles": 30
      }
    }
  },
  "domains": {
    "": {
      "title": "XXXXXX",
      "title2": "XXXXXX",
      "hide": 5
    }
  },
  "letsencrypt": {
    "email": "XXXXXX@XXXXXX",
    "names": "XXXXXX",
    "production": true
  }
}
sheshko-as added the bug label Jun 15, 2024
si458 (Collaborator) commented Jun 15, 2024

Something happened between 11 and 12, from the looks of the graph.

going to sound like a DAFT one, can you disable/remove the autoBackup and restart and monitor?

and the fact that it looks like it's loading itself over and over again in the pic doesn't look good either?

sheshko-as (Author):

going to sound like a DAFT one, can you disable/remove the autoBackup and restart and monitor?

I disabled autoBackup, rebooted the server, and am monitoring it.

sheshko-as (Author):

Sometimes this error appears in the logs:

-------- 6/16/2024, 9:33:59 PM ---- 1.1.24 --------

(node:55552) Warning: An error event has already been emitted on the socket. Please use the destroy method on the socket while handling a 'clientError' event.
(Use node --trace-warnings ... to show where the warning was created)

but I don't think it's related to the problem

si458 (Collaborator) commented Jun 17, 2024

@sheshko-as that issue has been around for about a year,
it first popped up when we had to move to node 14 and upgraded expressjs
haven't been able to track down what line is causing it yet, but I don't think it's affecting you
UNLESS the timestamp of the event is WHEN you notice the memory increasing?

sheshko-as (Author):

going to sound like a DAFT one, can you disable/remove the autoBackup and restart and monitor?

It didn't help

sheshko-as (Author):

Server Error Log:
-------- 6/19/2024, 9:02:28 PM ---- 1.1.24 --------

<--- Last few GCs --->

[89911:0x5ff48b0] 17700692 ms: Mark-sweep 4047.0 (4138.1) -> 4034.2 (4141.1) MB, 2816.1 / 0.0 ms (average mu = 0.346, current mu = 0.030) allocation failure; scavenge might not succeed
[89911:0x5ff48b0] 17705461 ms: Mark-sweep 4050.1 (4141.1) -> 4037.5 (4144.4) MB, 4691.7 / 0.0 ms (average mu = 0.167, current mu = 0.016) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

1: 0xb9c310 node::Abort() [/usr/bin/node]
2: 0xaa27ee [/usr/bin/node]
3: 0xd73eb0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xd74257 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xf515d5 [/usr/bin/node]
6: 0xf63aad v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
7: 0xf3e19e v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
8: 0xf3f567 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
9: 0xf2076a v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
10: 0x12e599f v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/bin/node]
11: 0x17125f9 [/usr/bin/node]
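
The trace above shows the V8 heap topping out around 4 GB, which is roughly Node's default old-space ceiling. Raising the ceiling will not fix a leak, but it can postpone the crash long enough to capture heap snapshots for comparison. A sketch, assuming a systemd unit named meshcentral.service (the flag values are illustrative):

# In a systemd override (sudo systemctl edit meshcentral.service), add:
#   [Service]
#   Environment="NODE_OPTIONS=--max-old-space-size=8192 --heapsnapshot-signal=SIGUSR2"
# then restart the service:
sudo systemctl restart meshcentral.service
# Later, while memory is climbing, write a .heapsnapshot into the process's working directory:
kill -USR2 "$(systemctl show -p MainPID --value meshcentral.service)"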

sheshko-as (Author):

The "Useful config.json settings" from the debugging guide also did not help:
"AgentsInRAM": false,
"AgentUpdateBlockSize": 2048,
"agentUpdateSystem": 1,
"noAgentUpdate": 1,
"WsCompression": false,
"AgentWsCompression": false,

https://ylianst.github.io/MeshCentral/meshcentral/debugging/

silversword411 (Contributor) commented Jun 26, 2024

Are you using some kind of VPN/proxy between agent and server?

Can you monitor the ws connections between agent and server? By default they will stay up for 24 hours once established... but I've seen VPNs and other networking software/proxies prematurely close ws connections.

When mesh realizes its connection is dead, it stands up a new connection, but I think the old one isn't cleaned up... memory leak.
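
A rough way to watch the established connection count on the server over time (assuming MeshCentral is listening on the default port 443):

# Count established TCP connections to port 443, refreshed every 60 seconds.
watch -n 60 "ss -Htn state established '( sport = :443 )' | wc -l"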

si458 (Collaborator) commented Jun 26, 2024

@silversword411 I'm not sure if that's the case?
because my other post here about duplicate agents fixes this exact issue #6205 (comment)
if it notices duplicate connections/agents, it disconnects the first one and lets the second one through
although I suppose it could be failing to close the connection properly?

si458 (Collaborator) commented Jun 26, 2024

we need to find out what you did / what happens when the memory starts to climb
did you connect to a certain device?
did loads of devices connect?
did someone stop a recording session and it was saving from memory to file?
were there loads of login attempts?
have you tried switching from mongodb to just nedb or say mysql? in case it's a database issue

sheshko-as (Author) commented Jun 26, 2024

Are you using some kind of VPN/proxy between agent and server?

We don't use one; agents connect directly by domain name.

sheshko-as (Author):

we need to find out what you did / what happens when the memory starts to climb did you connect to a certain device? did loads of devices connect? did someone stop a recording session and it was saving from memory to file? were there loads of login attempts? have you tried switching from mongodb to just nedb or say mysql? in case it's a database issue

I'm trying to monitor all of this, but I don't see any pattern yet. The database is the only thing I have not changed yet; I will try switching it to postgresql, for example. There is one specific detail: we have a lot of client computers with the C: drive frozen via the Shadow Defender program, as well as a lot of diskless computers that boot from a single vhd image (one vhd image can serve 30-40 PCs at once). For these computers, the device group is configured to remove the device when it goes offline.

si458 (Collaborator) commented Jun 26, 2024

@sheshko-as wow, that does sound mad/complex!
I mean, you could be duplicating the meshids if you are using VHD images!?
and that could be causing an issue/confusion on the server!
so maybe yeah, set a few groups to delete themselves when they disconnect and see if it helps?

sheshko-as (Author):

@sheshko-as wow, that does sound mad/complex! I mean, you could be duplicating the meshids if you are using VHD images!? and that could be causing an issue/confusion on the server! so maybe yeah, set a few groups to delete themselves when they disconnect and see if it helps?

All groups that share one VHD image are already configured to delete devices once they go offline.

si458 (Collaborator) commented Jun 26, 2024

@sheshko-as hmmm, and they vanish OK?
this is getting very strange now, as I literally have no idea what's causing the memory leak/increase
you COULD try running node --trace-warnings node_modules/meshcentral and see if that ever displays any output before it crashes
it could very well be the --trace-warnings issue,
also you could try using the latest node 20.15.0, as there was a push for a fix to do with misleading messages
nodejs/node#51204
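
For reference, running the server in the foreground with warning stack traces enabled might look like this (a sketch; /opt/meshcentral is only an assumed install path):

sudo systemctl stop meshcentral.service
cd /opt/meshcentral   # assumed install directory; adjust as needed
node --trace-warnings node_modules/meshcentral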

sheshko-as (Author):

hmmm, and they vanish OK?

yes

sheshko-as (Author):

you COULD try running node --trace-warnings node_modules/meshcentral and see if that ever displays any output before it crashes
it could very well be the --trace-warnings issue,
also you could try using the latest node 20.15.0, as there was a push for a fix to do with misleading messages
nodejs/node#51204

Okay, I'll try all of that and write back with the results.

si458 (Collaborator) commented Jun 26, 2024

@sheshko-as no worries!
in theory node 20.13+ will fix the Warning: An error event has already been emitted on the socket. Please use the destroy method on the socket while handling a 'clientError' event. messages
(explains why I'm not seeing them anymore, as I moved to node 20)
but I will downgrade my setup to node 18 and monitor whether it increases RAM-wise too

sheshko-as (Author):

@sheshko-as no worries! in theory node 20.13+ will fix the Warning: An error event has already been emitted on the socket. Please use the destroy method on the socket while handling a 'clientError' event. messages (explains why I'm not seeing them anymore, as I moved to node 20) but I will downgrade my setup to node 18 and monitor whether it increases RAM-wise too

I updated to 20.15; it did not help, the problem happened again today.

si458 (Collaborator) commented Jun 27, 2024

@sheshko-as which issue, sorry? you mean the memory increase/crash?
did you see when it started increasing, and whether you did anything like connect to a computer?
in theory the An error event has already been emitted warning should vanish with the latest LTS of node 20

sheshko-as (Author):

did you see when it started increasing, and whether you did anything like connect to a computer?

I'm trying to figure out which action triggers the uncontrolled RAM growth, but no luck yet.

sheshko-as (Author):

you mean the memory increase/crash?

yes

sheshko-as (Author) commented Jul 2, 2024

I think I found it: the problem is due to relay sessions. For many users, when opening a lot of RDP sessions, several MeshCentralRouter windows open instead of one, as expected. I think the problem occurs when MeshCentral users end up running multiple copies of the MeshCentralRouter application because of an error when clicking the RDP button in the browser. If a user has only one instance of MeshCentralRouter running, RAM does not grow. I'll try to find out why this is happening.
[screenshots attached]

si458 (Collaborator) commented Jul 2, 2024

OK, that's an interesting theory!

So it's opening multiple MeshCentralRouter sessions that seems to increase the RAM on the server side!

I'll have to test it myself; sadly I don't have 32 computers with RDP. But I suppose I could open 32 remote desktops and see if the memory starts increasing!

sheshko-as (Author):

do you think a low-quality VPS can cause this kind of RAM leak behavior?

si458 (Collaborator) commented Jul 5, 2024

@sheshko-as sorry, I've not had a chance to test yet; I've been working on the android app and have also been poorly (full of cold/flu again!)
in theory the VPS you have is already a good spec, 4 cores, 16gb ram, so I can't see the VPS being the issue.

sheshko-as (Author) commented Jul 5, 2024

I changed the VPS provider again, to the best one in our country, and the problem has not appeared for a day.
This problem also no longer occurs: #6127
[screenshot attached]

I'll run the tests for a couple more days and let you know the result.

sheshko-as (Author) commented Jul 11, 2024

I think the problem is related to this error: #6127
If I return obj.user._id, then an error occurs at meshuser.js:792 when the memory leaks.
If "obj." is deleted from "obj.user._id", the memory leak continues until the server's memory runs out.

It was also noticed that on the MeshCentral admins' computers, several instances of MeshCentralRouter open for some reason when working in the browser and clicking the RDP connection button. If the administrator notices that several windows are open and closes all but one, RAM consumption on the server drops sharply.

si458 (Collaborator) commented Jul 11, 2024

One thing to try is using the master branch, which includes a fix for the other issue, and see if the memory leak still happens? A fuller command sequence is sketched below.

  1. Stop meshcentral.
  2. npm install Ylianst/MeshCentral
  3. Start meshcentral
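
A fuller sketch of those steps, assuming a systemd service and an install under /opt/meshcentral (both may differ on your setup):

sudo systemctl stop meshcentral.service
cd /opt/meshcentral                    # assumed install directory
sudo npm install Ylianst/MeshCentral   # installs the current master branch from GitHub
sudo systemctl start meshcentral.service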

sheshko-as (Author):

I think we've found what's causing the memory leak. For some reason, when users work through the MeshCentralRouter RDP links, several copies of MeshCentralRouter.exe are sometimes launched. At that point a very fast memory leak begins; as soon as these windows are closed, the memory leak stops.
Example: one user found minimized windows during a memory leak:
[photo attached]
The moment when these windows were closed:
[screenshot attached]

The problem occurs for different users and in different browsers; we have not yet been able to pin down the exact actions that open the second (and further) copies of the program.

si458 (Collaborator) commented Jul 25, 2024

ok, well that's good to know!
somebody else has mentioned recently that their RDP clicks from the web ui to the router don't work,
but also the RDP buttons in the router don't work either?
will need to investigate it next week when I'm home!

si458 (Collaborator) commented Jul 29, 2024

@sheshko-as I can't seem to replicate your findings?
how are you starting the RDP sessions?
when I click the rdp button in the web ui, it opens up meshcentralrouter correctly, then sets the port and opens up the rdp client as expected,
then when I pick a different device in the web ui and again click the rdp button,
it opens the existing meshcentralrouter, then adds another port and opens up another rdp client as expected?

it doesn't seem to be loading up multiple meshcentralrouter.exe instances at all?

but just a RANDOM idea, can you try my version of meshcentralrouter? MeshCentralRouter.zip
it's built from the master repo https://github.com/Ylianst/MeshCentralRouter but includes a few fixes and improvements

sheshko-as (Author):

So far it seems this happens when MeshCentralRouter has been inactive for a long time. For example: the first RDP connection via MeshCentralRouter is active, and a second one is made an hour later; in that case a second copy of MeshCentralRouter may open. But this is not confirmed yet, we are still testing it.

sheshko-as (Author) commented Jul 30, 2024

but just a RANDOM idea, can you try my version of meshcentralrouter? MeshCentralRouter.zip

Okay, we'll check it out, and I'll write back with the test results.

sheshko-as (Author):

@sheshko-as I can't seem to replicate your findings? how are you starting the RDP sessions? when I click the rdp button in the web ui, it opens up meshcentralrouter correctly, then sets the port and opens up the rdp client as expected, then when I pick a different device in the web ui and again click the rdp button, it opens the existing meshcentralrouter, then adds another port and opens up another rdp client as expected?

it doesn't seem to be loading up multiple meshcentralrouter.exe instances at all?

but just a RANDOM idea, can you try my version of meshcentralrouter? MeshCentralRouter.zip it's built from the master repo https://github.com/Ylianst/MeshCentralRouter but includes a few fixes and improvements

It is very difficult to understand why this is happening, but the same problem occurs in the new version you sent me.

si458 (Collaborator) commented Sep 12, 2024

long time no speak on this issue
please can you just try this version of meshcentralrouter? Ylianst/MeshCentralRouter#100 (comment)
I've found a bug where, when you selected rdp over and over again, it was creating port forwardings over and over again...
when in reality it was supposed to check whether there was already a port being forwarded and use that port, not create a new one!
commit for the fix is here Ylianst/MeshCentralRouter@a99a01d

sheshko-as (Author):

long time no speak on this issue please can you just try this version of meshcentralrouter? Ylianst/MeshCentralRouter#100 (comment) I've found a bug where, when you selected rdp over and over again, it was creating port forwardings over and over again... when in reality it was supposed to check whether there was already a port being forwarded and use that port, not create a new one! commit for the fix is here Ylianst/MeshCentralRouter@a99a01d

It is very difficult to find the cause, but we are trying! Of course we will check your version of MeshCentralRouter; I will write back with the results.

sheshko-as (Author):

The new version of MeshCentralRouter and the new version of MeshCentral did not help, but the problem with multiple copies of MeshCentralRouter opening seems to have been solved.

After upgrading to version 1.1.32, the memory leaks began to appear more frequently.

There is nothing in the MeshCentral error log, it is completely empty.

In journalctl:
[screenshots attached]

sheshko-as (Author):

[screenshot attached]
It looks very much like some kind of action is leading to this, but I can't figure out which one.

sheshko-as (Author):

The problem is definitely in MeshCentralRouter and is related to RDP. We have many servers with the same name in different groups. I also noticed that MeshCentralRouter shows the same group for different RDP connections, even though the PCs are from different groups.
[screenshot attached]
Line 1: CCD:WSG
the CCD server from the WSG group
Line 2: CCB:WSG
the CCB server, but it is from the WSG2 group, yet shown as WSG
the remaining lines have the same problem

si458 (Collaborator) commented Oct 6, 2024

@sheshko-as I thought I fixed that bug in your screenshot?
Have you downloaded the new version?
It was creating ports over and over rather than using one that already exists
Now it uses the existing port

sheshko-as (Author):

@sheshko-as I thought I fixed that bug in your screenshot? Have you downloaded the new version? It was creating ports over and over rather than using one that already exists Now it uses the existing port

We have updated MeshCentral to version 1.1.32
and MeshCentralRouter to version 1.8.9021

si458 (Collaborator) commented Oct 6, 2024

@sheshko-as huh, weird?

So you're still experiencing the bug where it's opening multiple rdp ports!

Will have to look into it!

When you click the rdp button, it should open your meshcentralrouter and then check if the device already has an rdp port open, and if so, use that port rather than create another one!?

sheshko-as (Author):

  1. I created two groups
    [screenshot attached]
  2. I connected via RDP to PC01
    [screenshot attached]
  3. I closed RDP on PC01; MeshCentralRouter also closed immediately
  4. I connected via RDP to PC01 and PC02 at the same time
    [screenshot attached]
    The first problem: the group for PC02 is not displayed correctly
  5. I closed RDP only for PC01
    [screenshot attached]
  6. Reconnected via RDP to PC01
    [screenshot attached]

si458 (Collaborator) commented Oct 7, 2024

thanks @sheshko-as, I can replicate BOTH of the issues you are having!

I will have a look tomorrow when I get a chance!

this could possibly be the reason WHY you are seeing the RAM leak, if it's opening port after port after port from the web ui

as this doesn't happen if you use meshcentralrouter to do the rdp (right click the device and click RDP)

also it's weird that the group name is being shown incorrectly too!?

P.S: can you open this exact issue in the meshcentralrouter repo and link this issue in?
also include the screenshots etc! as I think the bug is with meshcentralrouter and NOT meshcentral itself

si458 (Collaborator) commented Oct 7, 2024

@sheshko-as I do believe this commit now fixes the multiple port issue! Ylianst/MeshCentralRouter@e98f76a

basically it was adding the ports over and over again regardless of whether they already existed!
we already had code in there to avoid opening multiple rdp ports when you right click and select RDP IN the app
so a simple copy+paste of that code to handle the mcrouter links seems to have fixed the issue!

please can you try this build and let me know!
MeshCentralRouter.zip

PS: I'm aware the group names are still not showing correctly, going to look at that tomorrow when I'm not falling asleep

sheshko-as (Author):

When I disconnected from PC01 and reconnected via RDP, a new port did not open.
[screenshot attached]
But after reconnecting to RDP on PC01, windows keep appearing automatically all the time:
[screenshots attached]
They don't stop until you close MeshCentralRouter.

si458 (Collaborator) commented Oct 8, 2024

oh wow! ok! erm? weird, will have a look into that
as I did notice/spot every so often when I was testing (even before my code change)
that when I clicked the RDP link, the rdp app would open up but meshcentralrouter wasn't open, which seemed odd?

si458 (Collaborator) commented Oct 8, 2024

I've just had another look, and I'm not too sure what the issue now is?

when you connect to device 1, it opens meshcentralrouter and then opens the RDP app as expected
then if you connect to that device, and then RDP connect to device 2,
it also opens meshcentralrouter (which is already open) and then opens up the RDP app for device 2

you then do your sessions/control/etc, then close out of the RDP apps, and it keeps the tunnels open as expected,
then when you go back into the web ui and pick say device 1 RDP, it opens meshcentralrouter then opens the RDP app
and no more duplicate ports!

only 1 thing: IF YOU CLOSE THE RDP APP, MESHCENTRALROUTER WILL CLOSE TOO, but ONLY if you connect to a single RDP device; if you then open another RDP device, the close feature gets cancelled
this is because of a feature called autoexit which is enabled by default

sheshko-as (Author):

  1. I connect via RDP to PC01 and PC02
  2. I close the RDP window for PC01
    [screenshots attached]
  3. I connect via RDP to PC01
    [screenshots attached]
    perhaps this spam of windows occurs if you make the MeshCentralRouter window active (select the window by clicking the mouse)

si458 (Collaborator) commented Oct 8, 2024

I can't replicate that bug at all?

Will have another look tomorrow and try again.

Are you actually doing any rdp sessions/connected, or are you just leaving the RDP windows open but not connected?

sheshko-as (Author):

This problem appeared in the new version that you sent: 1.8.9046

sheshko-as (Author):

Are you actually doing any rdp sessions/connected, or are you just leaving the RDP windows open but not connected?

I leave the PC02 RDP session open, close PC01 completely, and then reconnect to it via RDP.

sheshko-as (Author) commented Nov 21, 2024

I may be wrong, but since the release of the new version, memory leak warnings have started appearing:

(node:54843) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [Socket]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit
(node:54843) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [Socket]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit
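
To find where the extra listeners are being attached, the warning can be expanded into a stack trace (a sketch, assuming a systemd unit named meshcentral.service):

# In a systemd override (sudo systemctl edit meshcentral.service), add:
#   [Service]
#   Environment="NODE_OPTIONS=--trace-warnings"
sudo systemctl restart meshcentral.service
# Each MaxListenersExceededWarning in the journal will then include a stack trace:
journalctl -u meshcentral.service -f | grep --line-buffered -A 15 MaxListenersExceededWarning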

si458 (Collaborator) commented Nov 21, 2024

@sheshko-as a lot of people are reporting that warning at the moment, but on different issues/setups
I haven't been able to track it down or replicate it yet

PetieM commented Nov 21, 2024

I may be wrong, but since the release of the new version, memory leak warnings have started appearing:

(node:54843) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [Socket]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit (node:54843) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [Socket]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit

Confirming I'm seeing this as well running on Debian 11 with Node 18.18.0 and NPM 10.8.2 at the moment.
