Allocate reasonable memory for counts step on computing clusters #73

Open
Nick-Eagles opened this issue Jun 15, 2021 · 1 comment
Labels: enhancement (New feature or request)

@Nick-Eagles (Member)

The SGE, SLURM, and JHPCE configurations request 140GB of memory for the CountObjects process. While this much memory has occasionally been needed for large experiments, 140GB is typically excessive. The best solution is likely a closure that scales the memory request dynamically with the number of samples, possibly with a hard maximum. This may require some testing to determine an appropriate approximate function for memory vs. number of samples.
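
As a rough illustration only, such a closure could live in the per-cluster config files. This is a sketch, not the pipeline's actual configuration: the manifest location (`${params.input}/samples.manifest`), the 10GB baseline, and the 2GB-per-sample slope are placeholder assumptions that would need the testing mentioned above.

```groovy
// Sketch: scale the CountObjects memory request with the number of samples,
// with a hard maximum at the current 140GB. The manifest path and the
// coefficients are illustrative assumptions, not values from the SPEAQeasy configs.
process {
    withName: CountObjects {
        memory = {
            // Estimate the number of samples from the manifest (one sample per line)
            def manifest = new File("${params.input}/samples.manifest")
            def n_samples = manifest.exists() ? manifest.readLines().count { it.trim() } : 1
            // Linear scaling with a baseline and a hard cap
            def gb = Math.min(140, 10 + 2 * n_samples)
            "${gb} GB"
        }
    }
}
```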

@lcolladotor (Member)

We could start with a more reasonable default, though, say 50GB. Then even if we request more memory than we need, we would waste less of it.
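
For reference, the interim change might look like the following in a per-cluster config; the file name and selector here are assumptions, not the exact SPEAQeasy layout.

```groovy
// Sketch: a more conservative static default for CountObjects,
// e.g. in a cluster-specific config such as conf/jhpce.config (path assumed)
process {
    withName: CountObjects {
        memory = 50.GB   // down from 140.GB
    }
}
```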

Nick-Eagles added a commit that referenced this issue Jun 16, 2021
…the huge 140GB. This is a temporary improvement before fully addressing #73
@lcolladotor lcolladotor added the enhancement label Nov 30, 2023
@lcolladotor lcolladotor moved this to Todo in SPEAQeasy plans Nov 30, 2023
@lcolladotor lcolladotor added this to the bioc v3.22 milestone Nov 30, 2023