Provide option to have variable size internal queues which are persisted #2606
Comments
Ditto'ing the comment on #2605: it would be great if this built-in buffering mechanism were based on an abstract contract that could be implemented via an extensible plugin framework.
We will be closing this, as the first iteration of Persistent Queues in Logstash will be variable length. Follow progress in #5638.
As mentioned in #2605, Logstash uses in-memory bounded queues between pipeline stages (input to filter, filter to output) to buffer events. The size of each queue is fixed at 20 events and is not configurable. This works well in practice.
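For illustration, here is a minimal Java sketch of the idea; it is not Logstash's actual pipeline code, and all class and method names are hypothetical. A fixed-capacity blocking queue sits between two stages, and a full queue blocks the producer, which is how back pressure propagates upstream to the event source.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative only: a fixed-capacity, in-memory handoff between two
// pipeline stages. The capacity of 20 mirrors the hard-coded size
// described above.
public class BoundedHandoff {
    private static final BlockingQueue<String> INPUT_TO_FILTER =
            new ArrayBlockingQueue<>(20);

    // Input stage: put() blocks once 20 events are in flight, so a slow
    // filter stage back-pressures the input stage and, transitively,
    // the event source.
    public static void publish(String event) throws InterruptedException {
        INPUT_TO_FILTER.put(event);
    }

    // Filter stage: take() blocks while the queue is empty.
    public static String consume() throws InterruptedException {
        return INPUT_TO_FILTER.take();
    }
}
```

Because the buffer lives in process memory, anything still queued is lost if the process dies, which is the motivation for the persistence option requested here.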
In Logstash deployments which require high throughput and resiliency, users typically deploy message brokers such as Redis, RabbitMQ, or Apache Kafka. Essentially, this breaks the pipeline into two stages: a lightweight shipping stage that forwards raw events to the broker, and a relatively expensive processing stage that consumes from it.
This helps in cases where there is a mismatch in cadence between the shipping stage and the processing stage. The broker's queue buffers events outside of the application machines and does not add back pressure to the source.
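To make the two-stage split concrete, here is a hedged Java sketch using a Redis list as the broker via the Jedis client. In a real deployment the two stages would simply be two Logstash processes connected through the redis output and input plugins; the class, method, and key names below are illustrative only.

```java
import redis.clients.jedis.Jedis;
import java.util.List;

// Illustrative sketch of a broker-decoupled pipeline: a shipper process
// pushes raw events onto a Redis list, and a separate indexer process
// pops them for the expensive filter/output work.
public class BrokeredPipeline {
    private static final String QUEUE_KEY = "logstash-events"; // hypothetical key

    // Stage 1: shipper, colocated with the event source. LPUSH is cheap,
    // so a slow downstream consumer never back-pressures the source;
    // events simply accumulate in the broker.
    public static void ship(Jedis broker, String rawEvent) {
        broker.lpush(QUEUE_KEY, rawEvent);
    }

    // Stage 2: indexer, consuming at its own pace.
    public static String consume(Jedis broker) {
        List<String> popped = broker.brpop(0, QUEUE_KEY); // block until an event arrives
        return popped.get(1); // BRPOP returns [key, value]
    }
}
```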
Building on the work on persistent queues (#1939), we plan to offer a built-in alternative to an external message broker. By adding a variable-size queueing option to Logstash that is persisted to disk, we will provide a way to remove the dependency on external queues, making Logstash instances operationally easier to deploy and maintain. We will also provide APIs to monitor and interact with the queues.
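As a rough illustration of what a variable-size, disk-persisted queue could look like, here is a sketch under simplifying assumptions; it is not the design tracked in #1939 or #5638, and all names are hypothetical. Events are appended as length-prefixed records to a journal file, so the queue grows with available disk rather than a fixed capacity, and buffered events survive a process restart.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Minimal single-process sketch of a disk-backed queue. The read offset
// is kept in memory here; a real implementation would checkpoint it
// (giving at-least-once delivery after a crash), fsync writes, and
// checksum records.
public class DiskQueue implements Closeable {
    private final DataOutputStream writer;
    private final DataInputStream reader;

    public DiskQueue(File journal) throws IOException {
        this.writer = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(journal, true)));
        this.reader = new DataInputStream(
                new BufferedInputStream(new FileInputStream(journal)));
    }

    // Enqueue: append a length-prefixed record and flush so it reaches disk.
    public synchronized void enqueue(String event) throws IOException {
        byte[] bytes = event.getBytes(StandardCharsets.UTF_8);
        writer.writeInt(bytes.length);
        writer.write(bytes);
        writer.flush();
    }

    // Dequeue: read the next record, or return null if none is buffered yet.
    public synchronized String dequeue() throws IOException {
        if (reader.available() < Integer.BYTES) return null;
        int len = reader.readInt();
        byte[] bytes = new byte[len];
        reader.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    @Override
    public void close() throws IOException {
        writer.close();
        reader.close();
    }
}
```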