
Investigate increased memory usage #90

Closed
FoxxMD opened this issue May 5, 2022 · 2 comments
Labels: bug, help wanted

Comments

FoxxMD commented May 5, 2022

Coinciding with database support I have seen an increase in memory usage. Whether this is due to typeorm or to poor coding on my part is not clear yet...

Most likely my scenario is a worst case since I am doing the most with CM -- 70+ subreddits across 20+ bots with a few doing image comparison (using sharp).

Memory snapshots from init, after loading all configs, and a few minutes after running show tiny deltas (a few MB), but RSS on Linux shows 100-200MB. Additionally, usage eventually balloons to ~1.4GB. The memory snapshot still stays tiny, which makes me think this is non-heap. Some speculation:

  • Buffers being stored as globals from typeorm unit of work tracking things?
  • Memory fragmentation caused by allocator with Sharp
  • Seems unlikely, but snoostorm stores string ids of every seen activity in a Set
  • Other objects kept around for too long or not cleared out at the server/client express level such as log maps
    • Review usage of winston streams and streams at top-level to make sure nothing is being stored indefinitely

Also need to do more research to verify memory usage is actually non-heap and not something obvious I'm missing in the memory snapshot, or something explained by a Node utility I haven't used yet (like process.memoryUsage()).
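
Something along these lines should work for a quick check (a rough sketch only, not what is in the codebase yet):

```typescript
// Rough sketch, not CM code: compare RSS against the V8 heap numbers.
// A large rss-minus-heapTotal gap points at non-heap memory (native buffers from
// sharp, allocator fragmentation, etc.) rather than a JS heap leak.
const toMB = (bytes: number) => Math.round(bytes / 1024 / 1024);

const logMemory = (): void => {
    const { rss, heapTotal, heapUsed, external, arrayBuffers } = process.memoryUsage();
    console.log(
        `rss=${toMB(rss)}MB heapTotal=${toMB(heapTotal)}MB heapUsed=${toMB(heapUsed)}MB ` +
        `external=${toMB(external)}MB arrayBuffers=${toMB(arrayBuffers)}MB ` +
        `nonHeap=${toMB(rss - heapTotal)}MB`
    );
};

logMemory();
```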

FoxxMD added the bug and help wanted labels May 5, 2022

FoxxMD commented Jul 20, 2022

Running multiple instances with bots/subreddits separated by common functionality like so:

  • isolated bots that use image comparison (two separate instances)
  • isolated bot with very high volume
  • isolated bots with high actioned event occurrences and delayed activities (two separate instances)

In all cases memory ballooning did not seem to correlate with any specific behavior. It still occurred across all instances and at random -- both in when it happened and how often.

Added memory monitoring in 4196d2a and ce99009 that invokes process.memoryUsage() at an interval and sends metrics to InfluxDB. After doing this, all instances stayed within normal memory usage. 🤷
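
For reference, the monitoring is roughly along these lines (a sketch only; the Influx client, org/bucket names, and interval here are placeholders rather than the exact code from those commits):

```typescript
// Sketch with assumed details: @influxdata/influxdb-client, placeholder org/bucket/interval.
import { InfluxDB, Point } from '@influxdata/influxdb-client';

const writeApi = new InfluxDB({ url: 'http://localhost:8086', token: 'example-token' })
    .getWriteApi('example-org', 'cm-metrics');

setInterval(() => {
    const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
    writeApi.writePoint(
        new Point('memory')
            .intField('rss', rss)
            .intField('heapTotal', heapTotal)
            .intField('heapUsed', heapUsed)
            .intField('external', external)
    );
}, 30_000); // every 30 seconds
```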

There is no obvious root cause and no rhyme or reason to the ballooning, which leads me to...

GC is lazy

More than a few sources I've read point to Node's GC being greedy (using as much memory as is available to it) and lazy (waiting until the last possible moment to collect).

https://blog.heroku.com/node-habits-2016#7-avoid-garbage

Node (V8) uses a lazy and greedy garbage collector. With its default limit of about 1.5 GB, it sometimes waits until it absolutely has to before reclaiming unused memory. If your memory usage is increasing, it might not be a leak - but rather node's usual lazy behavior.

https://devcenter.heroku.com/articles/node-memory-use#tuning-the-garbage-collector

Versions of Node that are >= 12 may not need to use the --optimize_for_size and --max_old_space_size flags because JavaScript heap limit will be based on available memory.

https://medium.com/geekculture/node-js-default-memory-settings-3c0fe8a9ba1

Default Memory Limits: Node > 13.x is 4GB


While not a solution for the root cause, it should be possible to rein in memory usage with the --optimize_for_size and --max_old_space_size flags:

node --optimize_for_size --max_old_space_size=512 src/index.js run

or as env

NODE_OPTIONS=--max_old_space_size=512

(optimize_for_size not available for NODE_OPTIONS)

Will try this with Docker containers to see if it makes a difference and, if it does, add guidance to the docs for now.
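
Something like this in the Dockerfile should do it (a sketch only; the base image and build steps here are placeholders, not the actual CM Dockerfile):

```dockerfile
# Sketch only: base image and build steps are placeholders.
FROM node:16-alpine
# Every node process in the container inherits the old-space cap via NODE_OPTIONS.
ENV NODE_OPTIONS=--max_old_space_size=512
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "src/index.js", "run"]
```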


FoxxMD commented Aug 23, 2022

max_old_space_size has been in use in the Docker image for the past month and has worked nicely with no unexpected behavior. This definitely doesn't address the root cause, but it does provide a simple mitigation...

I'm going to close this for now. If it crops up again I'll re-open.

FoxxMD closed this as completed Aug 23, 2022