Improvement for pileup memory usage #188
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #188      +/-   ##
==========================================
+ Coverage   86.67%   86.68%   +<.01%
==========================================
  Files          76       76
  Lines        6051     6055       +4
  Branches      501      501
==========================================
+ Hits         5245     5249       +4
  Misses        305      305
  Partials      501      501
Continue to review full report at Codecov.
Force-pushed from 6e2f54d to ccc4eb0.

Rebased the patch onto the master branch.
Thank you for the investigation and PR! Added a suggestion comment.
Co-Authored-By: alumi <alumi@users.noreply.github.com>
LGTM 👍
Thank you for improving that!
LGTM 👍
Thanks for the review! 🙌
`cljam.algo.pileup` tends to consume a very large amount of memory and, in the worst case, ends up with an OOME. This PR mitigates the situation by improving the memory efficiency of the pileup data structure. The primary changes are as follows:

- Removed the intermediate data (`:seqs-at-ref`/`:quals-at-ref`) from the pileup result

In a small experiment, the changes reduced peak memory usage by ~45% with almost no impact on time efficiency:
See here for the detailed raw profiling data for the above result.
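cljam itself is written in Clojure, but the idea behind the change is language-neutral: drop the bulky per-locus intermediate fields from each pileup record before the records are retained, so the large underlying arrays become eligible for garbage collection at peak time. A minimal Python sketch of that idea (the field names and record shape here are hypothetical illustrations, not cljam's actual API):

```python
def summarize_locus(locus):
    """Return a compact copy of a per-locus record, excluding the
    bulky intermediate fields (analogous to dropping
    :seqs-at-ref/:quals-at-ref from the pileup result)."""
    return {k: v for k, v in locus.items()
            if k not in ("seqs_at_ref", "quals_at_ref")}

# Hypothetical per-locus records: each carries large intermediate
# arrays alongside the small summary fields callers actually need.
loci = [
    {"pos": 100, "depth": 3,
     "seqs_at_ref": ["ACGT"] * 1000, "quals_at_ref": [30] * 1000},
    {"pos": 101, "depth": 2,
     "seqs_at_ref": ["ACGA"] * 1000, "quals_at_ref": [29] * 1000},
]

# Only the compact records are accumulated; the big arrays are not
# kept alive by the result and can be reclaimed as iteration proceeds.
pileup = [summarize_locus(l) for l in loci]
print(pileup[0])  # -> {'pos': 100, 'depth': 3}
```

The point is that the saving comes from what the *result* retains, not from how the intermediates are computed: as long as each record's large fields are stripped before it is stored, the garbage collector can free them even while later loci are still being processed.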