
Queue total data size limit #253

Merged
merged 1 commit on Oct 24, 2016
Conversation

lomereiter
Contributor

@nosepy @sambrightman does it make sense to still keep the queue length limit with this change?

@sambrightman
Collaborator

It might make sense to require both to fail, I guess? Probably not.

To be honest, the rationale for the data size limit isn't completely clear to me, so I find it hard to comment. It seems like if the chunk limit fills quickly then your output is too slow anyway, so you are only postponing the problem. Lots of small chunks should also be output quickly, so you won't be waiting long.

Perhaps lots of small chunks queued behind one long-running chunk with a lower sequence number? Or maybe I misunderstood the use case.

@nosepy
Contributor

nosepy commented Oct 3, 2016

@sambrightman I observed exactly this scenario: a large, long-running chunk was followed by many small chunks, which filled up the queue quickly.

The queue size limit is no longer necessary if the memory limit is always enforced.

The other idea would have been to make the memory limit an optional parameter: keep the existing auto-tuning mechanism (without a memory limit) as long as no memory limit is specified on the command line, and switch to a memory limit with no queue size limit when one is given via the output-buffer-size command-line parameter.

If the queue size limit is removed, your solution looks good to me.
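For illustration only, a minimal sketch of a queue bounded by total data size rather than element count (this is not the sambamba implementation; the class name, the `max_total_bytes` parameter, and the use of Python are assumptions for the sake of the example). Producers block while adding a chunk would push the queued total over the memory limit, regardless of how many chunks are already queued:

```python
import threading
from collections import deque


class SizeBoundedQueue:
    """Queue bounded by the total size of queued chunks, not by their count."""

    def __init__(self, max_total_bytes):
        self._max_total_bytes = max_total_bytes
        self._total_bytes = 0
        self._chunks = deque()
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def put(self, chunk):
        # Block the producer while this chunk would exceed the memory limit.
        # The "and self._chunks" guard lets a single oversized chunk through
        # when the queue is empty, so it cannot deadlock.
        with self._not_full:
            while (self._total_bytes + len(chunk) > self._max_total_bytes
                   and self._chunks):
                self._not_full.wait()
            self._chunks.append(chunk)
            self._total_bytes += len(chunk)
            self._not_empty.notify()

    def get(self):
        # Block the consumer (output thread) until a chunk is available.
        with self._not_empty:
            while not self._chunks:
                self._not_empty.wait()
            chunk = self._chunks.popleft()
            self._total_bytes -= len(chunk)
            self._not_full.notify()
            return chunk
```

With a limit of this kind always enforced, a separate count-based limit becomes redundant, which matches the suggestion above to drop the queue length limit.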

@sambrightman
Collaborator

Right, if it's a realistic scenario even with fast output, then I think the memory limit makes sense, and there is probably no need to keep both. I had assumed that even one such long-running chunk wouldn't take that long (a few seconds), so there would be no need to complicate the mechanism.

Perhaps the default memory limit can also be somewhat auto-sized, e.g. 2 * threads * buffer size? I think the short chunks only occur when the buffer size spans over a contig or so?
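As a rough sketch of that auto-sizing suggestion (the helper name and parameters are hypothetical; the factor of 2 is just the number proposed above, and output-buffer-size is the command-line parameter mentioned earlier):

```python
def default_memory_limit(threads, buffer_size, explicit_limit=None):
    """Pick the queue's total-data-size limit.

    Uses the value given via --output-buffer-size when present,
    otherwise derives a default of 2 * threads * buffer_size.
    """
    if explicit_limit is not None:
        return explicit_limit
    return 2 * threads * buffer_size


# Example: 8 threads with 32 MiB buffers -> 512 MiB default limit.
print(default_memory_limit(8, 32 * 1024 * 1024))  # 536870912
```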

@sambrightman
Collaborator

Any chance we can get this merged and released?
