
feat: updated iced to master #340

Merged: 2 commits into main on Apr 19, 2024

Conversation

casperstorm (Member)

@andymandias could you test this branch re #309?
@hecrj updated iced with iced-rs/iced#2389.

andymandias (Collaborator)

Can do. Should I test with Vulkan disabled to rule out any issues from there? I can do both, but it takes a bit longer to get a clear signal when Vulkan is disabled.

hecrj (Contributor) commented Apr 17, 2024

You should update to a05b8044a9a82c1802d4d97f1723e24b9d9dad9c. I fixed a worker thread panic related to the backpressure changes.
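For anyone following along, updating to that revision comes down to pointing the git dependency at the commit. A minimal sketch of the Cargo.toml entry, assuming iced is pulled straight from the iced-rs repository (Halloy's actual dependency declaration may list features or live in a workspace table):

```toml
[dependencies]
# Hypothetical entry for illustration; the real declaration in Halloy may differ.
iced = { git = "https://github.com/iced-rs/iced", rev = "a05b8044a9a82c1802d4d97f1723e24b9d9dad9c" }
```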

andymandias (Collaborator)

From initial testing (with the tiny-skia backend, to exclude Vulkan issues), growth appears slower but hasn't shown signs of stopping yet. I'm going to keep testing to verify that it doesn't just have a higher steady-state memory usage than I'm expecting (and to keep refining configurations so the issue can be reproduced more reliably and quickly; I think I'm close to reproducing it without ZNC involved).

I'm reporting back a little early because, for the first time since tracking this issue, I occasionally see memory decrease while the client is in use. It seems that memory usage sometimes drops when I open the server buffer for one of the servers with a high rate of WHO polling; but that doesn't happen every time I open the server buffer, and memory use continues to rise afterward.

tarkah (Member) commented Apr 18, 2024

I'd suspect a lot of this comes down to the default allocator Rust uses on Linux, and allocated pages not being released back to the kernel.

You can try compiling Halloy with jemalloc, which prioritizes reducing memory fragmentation and therefore returns unused pages of memory to the kernel much more aggressively.
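A minimal sketch of what that looks like, assuming the tikv-jemallocator crate (the crate choice and version are an assumption, not something Halloy already depends on):

```rust
// Cargo.toml (assumed): tikv-jemallocator = "0.5"
use tikv_jemallocator::Jemalloc;

// Route all heap allocations through jemalloc instead of the system allocator.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // The rest of the application is unchanged; only the allocator differs.
}
```

With that in place, rebuilding in release mode and re-running the same test should show whether resident memory is returned to the kernel more promptly.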

I think this is why the memory reported by top can be a lot higher than the actual usage reported by heaptrack.

Your report of changing channels and seeing memory decrease backs this up: switching buffers frees and reallocates a lot of text allocations, which may finally cause a lot of unused memory to be returned to the kernel.

andymandias (Collaborator)

After an overnight test, it looks like memory use has simply settled at a higher level than I was expecting. 🥳 And @tarkah's explanation makes perfect sense to me as to why memory use appears larger than it actually is. I think we can consider #309 solved, and I can look into whether I need to open issues for Vulkan further upstream. Thank you everyone for looking into this! I think we got a lot of blood out of that stone 🙂

casperstorm merged commit 602cafa into main on Apr 19, 2024 (1 check passed).
casperstorm deleted the feat/update-iced branch on April 19, 2024 at 10:09.