Continuous memory leak with console_subscriber
#184
Over what time interval was the memory leak observed, and does it eventually become stable? I'm wondering about this because it's possible what you're seeing is not actually a memory leak per se, but expected memory use: by default, the console subscriber will store historical data for tasks when no clients are connected, so that this data can be played back to the client when it connects. Of course, this uses memory.

The historical data that's stored by default includes data for tasks which have completed, based on the assumption that users may wish to see information about what previously happened in the system. This data is retained for a period of time after the task completes, and then is eventually deleted. By default, completed tasks are retained for an hour after the task completes. You can change how long completed tasks are retained for using a builder setting if you're using the `console_subscriber` builder API.

Of course, it's also certainly possible that there is an actual memory leak somewhere (where memory is just never actually deallocated) --- this is still pre-release software, and there might be any number of bugs. :) But, it would be good to isolate that from the memory used to store historical data. Can you try changing the retention setting and see whether the behavior changes?
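For reference, a minimal sketch of what shortening the retention period via the builder might look like (this assumes the `ConsoleLayer::builder()` and `retention` methods from the `console_subscriber` builder API; double-check the docs for the exact names in the version you're using):

```rust
use std::time::Duration;

fn main() {
    // Sketch: configure the console layer via its builder instead of
    // console_subscriber::init(), keeping data about completed tasks for
    // one minute rather than the one-hour default.
    console_subscriber::ConsoleLayer::builder()
        .retention(Duration::from_secs(60))
        .init();

    // ... build the tokio runtime and spawn tasks as usual ...
}
```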
It would go up by about a megabyte every 10-15 minutes or so, even when no tasks were being spawned. This meant that overnight it could add up to 250 MB more.
Just thought I'd give a little bit of info here as it might be related. However, the instant I connect the console, the memory is freed. What I have also noticed is that if I use the builder and set the retention period instead of using the default, this does not occur.
Cargo.toml:
This AWS instance has 2 GB of RAM, and the program was using 55% of it; the normal amount is about 20 MB. When I last checked it was at 1.1 GB, and after opening the console it dropped back to 23 MB. This was over a period of approx. 6 days.
Looking at it again, this only seems to be part of the problem. Most of the cases occur when I use Tokio's sleep in a loop of some sort, but I suppose any type of spawned async task could lead to this, not just sleep.
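For illustration, a hypothetical reproduction of that pattern might look like the following (assuming the binary is built with `RUSTFLAGS="--cfg tokio_unstable"` as tokio-console requires; the loop body and durations are made up):

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Enable the console subscriber with default settings.
    console_subscriber::init();

    // A spawned task that sleeps in a loop: each iteration creates and
    // completes a sleep future whose history the subscriber retains.
    tokio::spawn(async {
        loop {
            tokio::time::sleep(Duration::from_secs(1)).await;
            // ... periodic work ...
        }
    });

    // Keep the process alive long enough to observe memory growth.
    tokio::time::sleep(Duration::from_secs(60 * 60)).await;
}
```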
Do not record poll_ops if there are no currently connected clients (watchers). Without this, `Aggregator::poll_ops` would grow forever. Follow-up to tokio-rs#311 and fix for these two:
- tokio-rs#184
- tokio-rs#500
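As a rough illustration of the fix's general shape (a hypothetical sketch, not the actual `console_subscriber` internals), the idea is to skip buffering per-poll events while no client is connected, so the buffer cannot grow without bound when nothing is consuming it:

```rust
// Hypothetical types standing in for the aggregator's state.
struct Watcher;
struct PollOp;

struct Aggregator {
    watchers: Vec<Watcher>,
    poll_ops: Vec<PollOp>,
}

impl Aggregator {
    fn record_poll_op(&mut self, op: PollOp) {
        // Skip recording entirely when no console client is attached;
        // otherwise this buffer only ever grows.
        if self.watchers.is_empty() {
            return;
        }
        self.poll_ops.push(op);
    }
}

fn main() {
    let mut agg = Aggregator { watchers: Vec::new(), poll_ops: Vec::new() };
    agg.record_poll_op(PollOp); // dropped: no watchers connected
    assert!(agg.poll_ops.is_empty());
}
```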
I think this is
When calling `console_subscriber::init()` there is a big memory leak that seems to grow faster the more futures are spawned. I ran this application through valgrind and it said that all the blocks were still reachable (the pointers still exist), so I'm guessing it's some sort of circular ref-counted data structure that isn't using `Weak` properly (or something equivalent).

When I commented out the `console_subscriber::init()` line, the leak completely went away (now idling at 3 MB when it would go to 252 MB after running for a couple of hours with the subscriber).