Add an option to send multiple log events as a record #12
Comments
@PettitWesley Any update here?
@diranged I started working on this back in December for the core plugin. TL;DR: no promises, but this is one of the next higher-priority things on our backlog.
@diranged We have implemented this feature in this plugin (the Go version) and plan to release it Mon-Wed next week. I had wanted to implement it in the C plugin, since that's supposed to be our focus now given its higher performance, but we have limited resources and Go is very fast to implement a change in. So we chose to implement it here to unblock you. Hopefully this works for your use case.
@PettitWesley Does the plugin work for input into Fluent Bit? That is, if I want to buffer/source logs from a Kinesis stream as an input, can I then forward them on?
According to the Kinesis Firehose docs:
The throughput limits for a stream can be measured in records per second or MiB/s. The default limits for most regions are 1,000 records per second and 1 MiB/s. What this means is that if you make a request with 500 records and the total payload is less than 4 MiB, you're not fully using the available throughput of the stream.
Right now, we send one log event per record, and in practice log events are usually quite small. This means that in most cases the throughput of the stream is under-utilized; for example, at an average event size of 200 bytes, one event per record caps the stream at roughly 200 KB/s (1,000 records/s x 200 bytes), a small fraction of the 1 MiB/s byte limit.
This is because, for some Firehose destinations like Elasticsearch, each record needs to be an individual log event.
However, many users will use Firehose to send to S3, and with S3 multiple log events can be sent in a single record as long as they are newline-delimited.
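To make the S3 case concrete, here is a minimal sketch in Go of what packing newline-delimited events into fewer records could look like. The helper name packEvents and the flushing policy are assumptions for illustration, not the plugin's actual implementation; the 1,000 KiB per-record limit (and the 500-record / 4 MiB per-request limits mentioned in the comments) come from the Firehose PutRecordBatch documentation.

```go
package main

import "fmt"

// Firehose limit: each record in a PutRecordBatch request can be up to
// 1,000 KiB before base64 encoding (requests are further capped at 500
// records or 4 MiB total, which batching code must also respect).
const maxRecordSize = 1000 * 1024

// packEvents is a hypothetical helper (not the plugin's real code) that
// concatenates newline-delimited log events into as few record payloads
// as possible, starting a new record whenever adding the next event
// would exceed the per-record size limit.
func packEvents(events []string) [][]byte {
	var records [][]byte
	var current []byte
	for _, e := range events {
		line := []byte(e + "\n")
		// An event larger than an entire record needs a drop/truncate
		// policy in real code; this sketch simply skips it.
		if len(line) > maxRecordSize {
			continue
		}
		// Flush the current record if the next event won't fit.
		if len(current)+len(line) > maxRecordSize {
			records = append(records, current)
			current = nil
		}
		current = append(current, line...)
	}
	if len(current) > 0 {
		records = append(records, current)
	}
	return records
}

func main() {
	events := []string{`{"msg":"a"}`, `{"msg":"b"}`, `{"msg":"c"}`}
	records := packEvents(events)
	fmt.Printf("%d events packed into %d record(s)\n", len(events), len(records))
}
```

Because Firehose concatenates record payloads without adding delimiters when writing to S3, a consumer reading newline-delimited JSON sees the same byte stream whether events were sent one per record or packed together this way.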