nsqd: option to bound disk footprint #549

Open · mreiferson opened this issue Feb 25, 2015 · 18 comments

@mreiferson
Member

From a conversation with @cespare on IRC: it would be useful to have a configuration option to bound the on-disk footprint of a given DiskQueue (topic/channel).

I propose we add a configuration option/flag --max-bytes-per-<INSERT GOOD NAME HERE> denoting the maximum size of a DiskQueue. This means you would effectively configure this at the nsqd level, but it would apply to each individual topic/channel's DiskQueue.

Since a DiskQueue is chunked by --max-bytes-per-file (and it probably doesn't make sense to require this new maximum to be divisible by the chunk size), the easiest implementation would be to ephemerally track the aggregate size of those files (rounding down) and unlink the oldest file once over the limit (updating metadata as appropriate).
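
A minimal sketch of what that could look like, assuming simplified, hypothetical DiskQueue fields and a maxBytesPerDiskQueue setting (this is not nsqd's actual implementation):

```go
// Hypothetical sketch only: enforce an aggregate byte limit on a disk queue
// by unlinking whole chunk files from the read end. Field and file names are
// illustrative, not nsqd's actual DiskQueue internals.
package diskqueue

import (
	"fmt"
	"os"
)

type diskQueue struct {
	name                 string
	dataPath             string
	readFileNum          int64 // oldest chunk file still on disk
	writeFileNum         int64 // chunk file currently being written
	maxBytesPerDiskQueue int64 // 0 = unlimited (current behavior)
}

func (d *diskQueue) fileName(num int64) string {
	return fmt.Sprintf("%s/%s.diskqueue.%06d.dat", d.dataPath, d.name, num)
}

// totalBytes sums the sizes of all chunk files currently on disk.
func (d *diskQueue) totalBytes() int64 {
	var total int64
	for i := d.readFileNum; i <= d.writeFileNum; i++ {
		if fi, err := os.Stat(d.fileName(i)); err == nil {
			total += fi.Size()
		}
	}
	return total
}

// enforceDiskLimit unlinks the oldest chunk files until the aggregate size is
// back under the configured maximum; metadata (depth, read position) would
// also need to be updated in a real implementation.
func (d *diskQueue) enforceDiskLimit() {
	if d.maxBytesPerDiskQueue <= 0 {
		return // disabled by default: retain the current "infinite" behavior
	}
	for d.totalBytes() > d.maxBytesPerDiskQueue && d.readFileNum < d.writeFileNum {
		os.Remove(d.fileName(d.readFileNum))
		d.readFileNum++
	}
}
```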

@mreiferson
Member Author

Thoughts, @jehiah?

@jehiah
Member

jehiah commented Feb 25, 2015

To make sure I follow: you are suggesting that past this limit nsqd throws away messages from the oldest file, not that it refuses new PUBs, right?

@cespare
Contributor

cespare commented Feb 25, 2015

Yes.

@jehiah
Member

jehiah commented Feb 25, 2015

Interesting. I can think of some desire to bound disk size, but I would normally equate that with wanting backpressure.

It feels like throwing away messages on disk would be more natural based on the count of messages on disk. Would that target the same need here, or is there a demonstrable difference in use case between the two approaches?

@cespare
Contributor

cespare commented Feb 25, 2015

@jehiah Here's the IRC conversation I had with @mreiferson: https://gist.github.com/cespare/a353b739e4511842aeb1

I would prefer to bound disk space by bytes, rather than by number of messages, since I know how much disk I would like to make available to NSQ. (To be honest that's what I'd want for the size of the in-memory channel bounds, too, which are currently specified with -mem-queue-size.)

@mreiferson
Member Author

> It feels like throwing away messages on disk would be more natural based on the count of messages on disk. Would that target the same need here, or is there a demonstrable difference in use case between the two approaches?

The only argument I can think of for using count would be consistency with the existing --mem-queue-size flag, but...

> To be honest that's what I'd want for the size of the in-memory channel bounds, too, which are currently specified with -mem-queue-size

Yea, me too 😦 - sizing by bytes makes the most sense operationally.

Assuming we're all on the same page that this feature is reasonable from an operational perspective, I think my only real concern is for it to be as future-proof as possible. --mem-queue-size is, in theory, going away, which makes me less inclined to use "count" for this new option simply for the sake of consistency today. Then we're left with bytes (which I think is more appropriate from the user's perspective anyway), so I just want to think of a name/semantics that makes sense in today's world and the potential world of tomorrow, ideally.

p.s. the other thing I forgot to mention was that this new config option would be disabled by default (i.e. retain the current "infinite" behavior).

@jehiah
Member

jehiah commented Feb 25, 2015

--max-disk-size?

@jehiah
Member

jehiah commented Feb 25, 2015

@mreiferson Sleeping on this issue has made me think about the value of a cleaner separation between ephemeral behavior (not persisting topic/channel structure beyond connections) and max-depth behavior, where overflow is discarded (setting aside for the moment whether the newest, oldest, or any message is thrown away), whether bounded by size or count, in memory or on disk, or combined. That separation could also open up the opportunity to have a backpressure setting that determines whether to refuse new PUBs (like our current disk write errors) or to discard messages beyond the set limits. (I haven't completely thought through how these do or don't play nicely with planned future disk storage changes.)

@cespare I mention message count as a limit because there are several spots where I would find that more useful than byte-size limits. We also use a script like this to drop messages beyond a threshold in some cases (like a dev instance).

@mreiferson
Member Author

@jehiah these are interesting ideas - I think there was some IRC discussion recently debating some of this separation. I think the really tricky part is that all of this "configuration" needs to happen at runtime, so talking about the potential ways to implement that is important.

Do you feel like this conversation needs to happen as it pertains to this issue (a knob to bound disk footprint)? I lean towards these being separate.

I do like the idea of being able to use either size or count to configure these options, though. That does have some relevance to this discussion (and resolves the potential inconsistency we were about to introduce).

mreiferson changed the title from "nsqd: option to bound a diskqueue" to "nsqd: option to bound disk footprint" on Aug 30, 2015
@earwin

earwin commented Sep 2, 2016

I think if you don't give the user an option, the proper way is to stop accepting new messages with an error once the limit is hit.
That is exactly what already happens when a runaway queue eats all available disk space.
Thus, we get better behavior while preserving the existing contract.

@judwhite
Contributor

judwhite commented Sep 4, 2016

@earwin I agree. We're okay with getting a failed PUB; tossing out old messages would not be desirable. Both use cases seem valid.

@AshishKumarGoel

I would be in favor of not accepting new messages. Deleting old messages is not desirable.

@elvarb

elvarb commented Oct 19, 2016

The option of deleting old messages and the option of not accepting new messages both have their use cases.

@earwin

earwin commented Oct 19, 2016

There are two separate concerns: functional (how the application wants to deal with overflow) and operational (what happens to the machine when overflow occurs).

Applications might want to delete the oldest messages, the newest ones, low-priority ones, every second one, and so on; supporting all of that is a long and painful road, which I'm not sure nsq should take.

As for operational concerns, we have a de facto contract: nsq eats up disk space, then stops accepting new messages.
This is an error, an exception, a failure, but it should be handled properly.
Right now nsq 1) kills the whole box it is running on (not everyone runs in a cloud-y, container-y environment, so this is an issue), and
2) cannot create the files it needs to manage the existing queue properly.

Rejecting incoming messages when the disk usage limit is hit breaks nothing and fixes both points above.
It is important to judge by disk usage, as opposed to the number of messages in the queue, since the queue is allocated and freed in a block-by-block manner.
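
As a rough illustration of that contract, here is a minimal sketch, assuming a hypothetical maxDiskBytes setting and backend wrapper (not nsqd's actual code path):

```go
// Hypothetical sketch only: refuse a PUB when the aggregate on-disk size of
// the backing queue has reached a configured limit, mirroring how nsqd
// already surfaces an error when a disk write fails.
package diskqueue

import "errors"

var ErrDiskLimitReached = errors.New("disk usage limit reached, message rejected")

type boundedBackend struct {
	maxDiskBytes  int64              // 0 = unlimited
	diskBytesUsed func() int64       // e.g. sum of the chunk file sizes
	put           func([]byte) error // the underlying disk queue write
}

// Put rejects the message so the client can retry, drop, or reroute it,
// rather than silently discarding data or filling the disk.
func (b *boundedBackend) Put(data []byte) error {
	if b.maxDiskBytes > 0 && b.diskBytesUsed()+int64(len(data)) > b.maxDiskBytes {
		return ErrDiskLimitReached
	}
	return b.put(data)
}
```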

@elvarb

elvarb commented Oct 19, 2016

I see two different use cases with different requirements.

  1. There is a local NSQd logging/metrics queue that is written to, and then NSQ_to_NSQ is used to ship from that local queue to a remote queue. If the connection is lost, you do not want the server to go down because the disk got full; the priority is for the server to stay up. Losing the oldest messages is preferable in the metrics use case, and losing the newest messages is preferable in the logging use case, because you want the logs to debug the problem.
  2. The remote NSQd queue server should not lose messages, so if space is full it should stop accepting new messages so that the local NSQd instances queue them up.

So we have three options (sketched as a configuration enum after the list):

  • Max X size, remove oldest (rolling data files where the oldest file is removed or emptied)
  • Max X size, remove newest (basically accept new messages and output to null)
  • Max X size, stop accepting messages
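
A rough sketch of how those three policies might be expressed as configuration; the OverflowPolicy type and field names are purely illustrative, and the flag name simply borrows the --max-disk-size suggestion above:

```go
// Hypothetical sketch only: the three overflow options above expressed as a
// per-nsqd (or per-topic/channel) setting. Names are illustrative.
package diskqueue

type OverflowPolicy int

const (
	// DropOldest: remove the oldest chunk file(s) once the limit is hit.
	DropOldest OverflowPolicy = iota
	// DropNewest: accept new messages but discard them (write to null).
	DropNewest
	// RejectNew: refuse new PUBs with an error (backpressure).
	RejectNew
)

type DiskLimitConfig struct {
	MaxDiskBytes int64          // e.g. a --max-disk-size flag; 0 = unlimited
	Policy       OverflowPolicy // what to do once MaxDiskBytes is exceeded
}
```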

@earwin

earwin commented Oct 19, 2016

The option to drop the newest messages instead of rejecting them is stupid (for lack of a better word). A client is perfectly capable of dropping the message itself after rejection (or maybe putting it into another queue, or a gazillion other options); it's not nsq's place to decide.

While you see two use cases, there are many more.
E.g. for some of our queues it is better to drop every other message, and others would benefit from dropping by priority.
And experience tells me that more users will have more use cases and ask for more dropping modes.
This is exactly why I'm trying to focus this issue on the operational point of view:
nsqd should not kill the machine it is running on.

@elvarb

elvarb commented Oct 19, 2016

I agree completely

This work would definitely make me happy regarding this problem. #625

@martin-sucha

An option to bound the disk queue size would be useful for us. We have staging and production consumers. Currently, if staging stops consuming, the staging channel could consume all of the disk space, which would affect production as well.

Are there any plans to implement a limit on disk usage? What changes would be necessary to implement a global option to limit channel/topic disk size?
