RUMM-610 Hotfix memory usage when doing intensive logging #185
Conversation
When you wait idle for a while after peaking at ~1.5 GB memory, it goes down to normal, correct?
And does that mean that now, if we log intensively, the SDK may exceed the max file size limit with one single file?
Good question. I did a test - when we stress it too much, memory peaks but goes back down once the logging stops. With the optimisation applied in this PR, all remains functional even a long while after excessive logging ends. As for file size: the SDK starts a new file if the recently used one exceeds the max size limit, so intensive logging cannot grow one single file past it.
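For context, the reuse decision mentioned above can be pictured like this (a minimal sketch assuming a `maxFileSize` parameter; the function name and logic are illustrative, not the SDK's actual API):

```swift
import Foundation

// Minimal sketch of the size check described above - `canReuse` and
// `maxFileSize` are illustrative names, not the SDK's actual API.
func canReuse(file: URL, maxFileSize: UInt64) -> Bool {
    guard let attributes = try? FileManager.default.attributesOfItem(atPath: file.path),
          let size = (attributes[.size] as? NSNumber)?.uint64Value else {
        return false  // unreadable size: be safe and start a new file
    }
    // A file is reused only while it stays under the limit, so even very
    // intensive logging cannot grow one single file past `maxFileSize`.
    return size < maxFileSize
}
```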
Good 👍 there is no memory leak, as it goes back to normal after a while. I was afraid there was a deeper problem within the filesystem 😅
@ncreated If it's not too much trouble, would you mind describing how you tracked it down to `purgeFilesDirectoryIfNeeded()`?
@hyling I used the Allocations instrument. When running the benchmark snippet, you can see that the bulk of the allocations traces back to `purgeFilesDirectoryIfNeeded()`.
@ncreated Ah, thanks for the explanation and the screenshot - they helped me realize that debug symbols for Datadog didn't get loaded into Instruments. After I fixed that, the Allocations instrument was much more helpful. 😄
What and why?
🧪 As reported in #178, the data upload memory leaks fixed in #181 do not solve the out-of-memory issue on very intensive logging. This PR adds the necessary optimisation to keep the allocations graph flat.
How?
I added this benchmarking code in the Example app to stress-test logging:
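The snippet itself is not reproduced on this page, so here is a minimal sketch of such a stress test, assuming the SDK is already initialized and a logger is built with `Logger.builder` (the iteration count and queue choice are illustrative):

```swift
import Foundation
import Datadog

// Minimal sketch of a logging stress test - the exact snippet from the PR is
// not reproduced here; the iteration count and queue are illustrative.
let logger = Logger.builder.build()

DispatchQueue.global(qos: .userInitiated).async {
    for i in 0..<100_000 {
        logger.info("Benchmark message #\(i)")
    }
}
```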
And noticed a crazy high number of system allocations coming from `try purgeFilesDirectoryIfNeeded()` in `FilesOrchestrator`. This method iterates through the list of files in the data directory, doing a lot of OS-internal `_FileCache` and `_NSFastEnumerationEnumerator` allocations (out of our control). The above benchmark results in this allocations graph, leading to an out-of-memory crash when the iOS process exceeds the `1.8GB` limit:

To mitigate this impact, I tuned the
`FilesOrchestrator` to call `try purgeFilesDirectoryIfNeeded()` only when necessary - when it knows that a new file will be created (vs. each time a writable file is requested). This keeps the memory graph flat for the same benchmark:

IMHO, by doing this, we give the OS enough time to reuse `_FileCache` things in a memory-performant way.
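A simplified sketch of that idea, with all identifiers assumed for illustration rather than taken from the SDK's actual internals:

```swift
import Foundation

// Illustrative sketch of the tuning described above - names, the size limit,
// and the structure are assumptions, not the SDK's actual implementation.
final class FilesOrchestrator {
    private let maxFileSize: UInt64 = 4 * 1_024 * 1_024  // example limit
    private var currentFile: URL?
    private var currentFileSize: UInt64 = 0

    /// Hot path: called for every batch of logs.
    func getWritableFile(forWriteSize writeSize: UInt64) throws -> URL {
        // Reuse the current file while it still fits under the size limit -
        // no directory enumeration happens on this path anymore.
        if let file = currentFile, currentFileSize + writeSize <= maxFileSize {
            currentFileSize += writeSize
            return file
        }
        // Only when a new file must be created do we pay for the directory
        // scan. Previously this ran on every write and flooded memory with
        // OS-internal `_FileCache` / `_NSFastEnumerationEnumerator` objects.
        try purgeFilesDirectoryIfNeeded()
        let newFile = try createNewFile()
        currentFile = newFile
        currentFileSize = writeSize
        return newFile
    }

    private func purgeFilesDirectoryIfNeeded() throws {
        // Enumerates the data directory and deletes the oldest files when the
        // total size exceeds the allowed maximum (elided in this sketch).
    }

    private func createNewFile() throws -> URL {
        // Creates a uniquely named file in the data directory (simplified).
        let url = URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent(UUID().uuidString)
        FileManager.default.createFile(atPath: url.path, contents: nil)
        return url
    }
}
```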
Review checklist
- [ ] Feature or bugfix MUST have appropriate tests (unit, integration)