High Memory Usage when Using ZNC Clientbuffer Module #309
Comments
Can you try running it with […]? You're right that we cap how many messages we store in memory and how many messages we render to the scrollable. I'll try to think if there's anywhere we allow memory to grow unbounded.
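For illustration only, here is a minimal sketch of what a capped message history like the one described can look like, assuming a hypothetical `History` type with a fixed cap; this is not Halloy's actual implementation:

```rust
use std::collections::VecDeque;

/// Hypothetical capped history: once the cap is reached, the oldest message is
/// dropped for every new one pushed, so the buffer itself cannot grow unbounded.
struct History<Message> {
    messages: VecDeque<Message>,
    cap: usize,
}

impl<Message> History<Message> {
    fn new(cap: usize) -> Self {
        Self {
            messages: VecDeque::with_capacity(cap),
            cap,
        }
    }

    fn push(&mut self, message: Message) {
        if self.messages.len() == self.cap {
            // Discard the oldest entry before storing the new one.
            self.messages.pop_front();
        }
        self.messages.push_back(message);
    }
}
```

With a structure like this in place, steady unbounded growth usually points at something outside the capped buffer (caches, render state, per-message allocations retained elsewhere) rather than at the history itself.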
Thank you for the suggestion. The issue with running under […]. Edit: […]
Running with a […], there are apparently ~88MB of Vulkan allocations at peak memory usage (where […]).
Ran […].
We really need to recreate the above […]. It's worth noting that Halloy used […]. Can you please test w/ #317, as this updates iced to include that fix and also reduces some unnecessary text allocations?
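As a rough illustration of what "reducing unnecessary text allocations" can mean in practice (this is not the actual change in #317 or the iced API; the `label_*` functions and `is_op` flag are made up for this example), a function that builds a fresh `String` on every call can often borrow instead and only allocate when the text really changes:

```rust
use std::borrow::Cow;

// Allocates a new String every time, even when no formatting is needed.
fn label_allocating(nick: &str, is_op: bool) -> String {
    if is_op {
        format!("@{nick}")
    } else {
        nick.to_string()
    }
}

// Borrows the existing &str in the common case and only allocates
// when the text actually has to change.
fn label_borrowing(nick: &str, is_op: bool) -> Cow<'_, str> {
    if is_op {
        Cow::Owned(format!("@{nick}"))
    } else {
        Cow::Borrowed(nick)
    }
}
```

When a function like this runs once per message per redraw, switching to the borrowing form removes an allocation on the hot path without changing behavior.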
Unfortunately, memory on x86_64 grows significantly slower than on aarch64. And the x86_64 machine has 4x as much memory as the aarch64 machine, so it doesn't slow down as quickly (it was running at 1GB without the stalls that I usually see at 400MB on aarch64). Running under […]. When I get some time I'll spend some effort trying to get […]. Running #317 on aarch64 now; will report back after giving memory use some time to grow (or not).
#317 still results in high memory usage on aarch64, unfortunately. To put some numbers on the growth rate: memory grew to ~450MB twice since my last post (once extra because I initially ran the wrong executable 😓), which was sufficient to cause occasional stalls. In the same time, x86_64 has grown to ~260MB of memory usage (not running #317, just running […]).
@andymandias So you are experiencing high CPU usage as well, I gather? Leaked memory should not generally have an impact on performance unless you are running out of it. If there is high CPU usage, then we know it's most likely not a memory leak; something is trying to perform a lot of work. Could you share a bit more about your hardware? Just to rule out any potential driver / OS issues.
@hecrj I did not pay close attention to the CPU usage since this machine is frequently RAM starved, and […]. Quickly grabbing potentially relevant specs from neofetch for the aarch64 machine (which grows in memory use fairly quickly): […]
And for the x86_64 machine (which grows in memory usage as well, but growth is significantly slower): […]
If there's anything missing there, I'll be happy to provide it.
Some progress on getting […]. Running on x86_64 under […]. I'm just picking out what seems like it might be useful from the reports, but I'm pretty new to […].
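For anyone else trying to get allocation reports out of a Rust binary, one in-process option is the `dhat` crate (dhat-rs), which records heap allocations without an external profiler; the API below is as documented for dhat-rs, but actually wiring it into Halloy's `main` is only an assumption here:

```rust
// Cargo.toml (assumed): dhat = "0.3"

// Route all heap allocations through dhat's tracking allocator.
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

fn main() {
    // Profiles the heap for the lifetime of this guard and writes
    // dhat-heap.json on drop, viewable in DHAT's viewer.
    let _profiler = dhat::Profiler::new_heap();

    // ... run the application as usual ...
}
```

The resulting `dhat-heap.json` ranks allocation sites by bytes retained at peak, which is usually the quickest way to see where a growing heap is coming from.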
A short update to report results from using […].
Going to close this now that #340 is merged. Thanks again for the extensive, detailed work from everyone to resolve this!
When using Halloy connected to ZNC with the Clientbuffer module, memory usage grows continually during use. When otherwise running Halloy, I see memory usage start around 100MB and grow to around 150MB, but when connected to ZNC with the Clientbuffer module, memory grows unbounded. Within around an hour, memory is often up to 250MB and the UI slows down, and memory use will continue to grow until the application stalls and must be force quit. The largest memory use I've seen so far is 3.5GB, but usually I don't let it get that high.
It doesn't matter whether I use the new nickname format or not (i.e. using an `@` in my username to identify the client is not necessary to produce the behavior). It seems to progress faster when two clients are connected to the ZNC bouncer, but it still happens when only one client is connected. It happens on Linux (both x86_64 and aarch64, though it progresses faster on the latter). I tried reproducing it under DHAT (valgrind), but annoyingly it does not seem to reproduce no matter how long I wait.

One thing I have noticed is that the server buffer gets a lot of `RPL_WHOREPLY` messages in it. I think this is somewhat expected when two (or more) clients are connected, since `last_who` will not (to my knowledge) reflect the `WHO` requests from another client (i.e. all the `WHO` polls sent from another client will show up in the server buffer). For some reason this also happens when only one client is connected. Since the server buffer history is capped at 10000 messages (as far as I know), I don't expect it to be causing the memory leak, but it's the only different behavior I've been able to spot so far.
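To make the `last_who` point concrete, here is a simplified sketch of the kind of per-channel `WHO` gating being described; the field names, interval, and structure are hypothetical rather than Halloy's actual code. The point is that each client only throttles its own polls, so `RPL_WHOREPLY` traffic triggered by another client connected to the same bouncer bypasses this check and arrives as unsolicited replies that end up in the server buffer:

```rust
use std::time::{Duration, Instant};

// Hypothetical poll interval; the real value is whatever the client uses.
const WHO_POLL_INTERVAL: Duration = Duration::from_secs(180);

struct Channel {
    /// When *this* client last sent WHO for the channel.
    last_who: Option<Instant>,
}

impl Channel {
    /// Decide whether this client should send another WHO poll. Replies caused
    /// by a different client connected to the same bouncer are never gated here,
    /// which is why they can appear even when we haven't asked recently.
    fn should_send_who(&self, now: Instant) -> bool {
        match self.last_who {
            Some(last) => now.duration_since(last) >= WHO_POLL_INTERVAL,
            None => true,
        }
    }
}
```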