Memory-efficient posterior generation #263
Conversation
This branch stopped memory from being used up during the for loop, but my job was killed due to OOM sometime after. I'm running with 140 GB, which should be plenty.
@jg9zk thanks for reporting. Any chance you could post the last few lines of the log file?
cellbender:remove-background: Working on chunk (377/383)
The OOM seems to occur at either line 549 or 550 of posterior.py in commit 6fd8c23 (the noise_offset_dict creation).
I was able to reproduce that same behavior @jg9zk |
* Speed up MCKP _gene_chunk_iterator() by a factor of 100
I tried commit 7fd0ac and it completed! However, it looks like counts are being added to the count matrix instead of removed; I'll open a separate issue about that.
* Add WDL input to set number of retries. (#247)
* Move hash computation so that it is recomputed on retry, and now-invalid checkpoint is not loaded. (#258)
* Bug fix for WDL using MTX input (#246)
* Memory-efficient posterior generation (#263)
* Fix posterior and estimator integer overflow bugs on Windows (#259)
* Move from setup.py to pyproject.toml (#240)
* Fix bugs with report generation across platforms (#302)
---------
Co-authored-by: kshakir <github@kshakir.org>
Co-authored-by: alecw <alecw@users.noreply.github.com>
It has become apparent that something in the posterior generation process in v0.3.0 is gobbling up far too much memory, more than previous versions did. See #251 and #248.
Conceptually, in v2:
Conceptually, in v3:
This refactor allows us to do a whole lot more, but it also involves computing and saving the full posterior, which was not attempted in v2. While this is perfectly doable (these posterior h5 files are usually less than 2 GB), it needs to be done a bit more carefully.
I think that extending python lists left objects around in memory (by creating references to them) that I did not intend to keep.
Adopting another strategy: keep a python list of (sparsified information as) torch tensors. Append a tensor to each list every minibatch. Concatenate them once and for all at the end. All these tensors are cloned from the originals, detached, and kept in CPU memory.
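A minimal sketch of this accumulate-then-concatenate pattern, using NumPy arrays as stand-ins for the torch tensors (the function name and data layout below are illustrative, not from the codebase):

```python
import numpy as np

def collect_sparse_chunks(minibatches):
    """Accumulate per-minibatch (row, col, value) results in plain lists,
    then concatenate once at the end instead of growing arrays in the loop."""
    rows, cols, vals = [], [], []
    for r, c, v in minibatches:
        # Copy each chunk so no reference to the original buffer is kept.
        # With torch, this is t.detach().clone().cpu(): detach from the
        # computation graph and move off the GPU before storing.
        rows.append(np.array(r, copy=True))
        cols.append(np.array(c, copy=True))
        vals.append(np.array(v, copy=True))
    # Single concatenation at the end: one allocation per output array,
    # after which the per-chunk copies can be garbage collected.
    return np.concatenate(rows), np.concatenate(cols), np.concatenate(vals)
```

The point of the design is that the per-chunk lists hold only small, detached copies, so peak memory stays near the size of the final output rather than growing with leftover references into each minibatch.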
Closes #248
Closes #251