a few doc fixes in preparation for 2.4 (#4597)
owen-d authored Oct 29, 2021
1 parent 0e21f02 commit 6eab7be
Showing 3 changed files with 5 additions and 4 deletions.
2 changes: 1 addition & 1 deletion docs/sources/best-practices/_index.md
@@ -69,7 +69,7 @@ Loki can cache data at many levels, which can drastically improve performance. D

## Time ordering of logs

Loki can be configured to [accept out-of-order writes](../configuration/#accept-out-of-order-writes).
Loki [accepts out-of-order writes](../configuration/#accept-out-of-order-writes) _by default_.
This section identifies best practices when Loki is _not_ configured to accept out-of-order writes.
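A minimal sketch of the setting involved, assuming the per-tenant `unordered_writes` limit described on the linked configuration page:

```yaml
limits_config:
  # Accept out-of-order writes (the default in Loki 2.4 and later).
  # Setting this to false requires strictly increasing timestamps per stream,
  # which is the situation the best practices below address.
  unordered_writes: true
```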

One issue many people have with Loki is their client receiving errors for out-of-order log entries. This happens because of the following hard-and-fast rule within Loki:
3 changes: 2 additions & 1 deletion docs/sources/clients/fluentd/_index.md
@@ -146,7 +146,8 @@ Use with the `remove_keys kubernetes` option to eliminate metadata from the log.

### Multi-worker usage

Out-of-order inserts may be configured for Loki; refer to [accept out-of-order writes](../../configuration/#accept-out-of-order-writes). If out-of-order inserts are not configured, attempting to insert a log entry with an earlier timestamp after a log entry with identical labels but a later timestamp, the insert will fail with `HTTP status code: 500, message: rpc error: code = Unknown desc = Entry out of order`. Therefore, in order to use this plugin in a multi worker Fluentd setup, you'll need to include the worker ID in the labels or otherwise [ensure log streams are always sent to the same worker](https://docs.fluentd.org/deployment/multi-process-workers#less-than-worker-n-greater-than-directive).
Out-of-order inserts are enabled by default in Loki; refer to [accept out-of-order writes](../../configuration/#accept-out-of-order-writes).
If out-of-order inserts are _disabled_, attempting to insert a log entry with an earlier timestamp after a log entry with identical labels but a later timestamp will fail with `HTTP status code: 500, message: rpc error: code = Unknown desc = Entry out of order`. Therefore, to use this plugin in a multi-worker Fluentd setup, you'll need to include the worker ID in the labels or otherwise [ensure log streams are always sent to the same worker](https://docs.fluentd.org/deployment/multi-process-workers#less-than-worker-n-greater-than-directive).

For example, using [fluent-plugin-record-modifier](https://github.com/repeatedly/fluent-plugin-record-modifier):
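A minimal sketch of such a filter, assuming Fluentd's built-in `worker_id` embedded-Ruby shortcut; the `loki.**` match pattern and the `fluentd_worker` record key are illustrative, and the added key can then be mapped to a Loki label so each worker produces its own stream:

```
# Hypothetical filter: stamp every record with the ID of the Fluentd worker that handled it
<filter loki.**>
  @type record_modifier
  <record>
    fluentd_worker "#{worker_id}"
  </record>
</filter>
```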

4 changes: 2 additions & 2 deletions docs/sources/clients/lambda-promtail/_index.md
@@ -84,7 +84,7 @@ Incoming logs can have three special labels assigned to them which can be used i

### Promtail labels

Note: This section is relevant if running Promtail between lambda-promtail and the end Loki deployment (out of order workaround).
Note: This section is relevant if you are running Promtail between lambda-promtail and the end Loki deployment; it was used to circumvent `out of order` problems prior to the v2.4 Loki release, which removed the ordering constraint.

As stated earlier, this workflow moves the worst-case stream cardinality from `number_of_log_streams` -> `number_of_log_groups` * `number_of_promtails`. For this reason, each Promtail must have a unique label attached to the logs it processes (ideally via something like `--client.external-labels=promtail=${HOSTNAME}`), and it's advised to run a small number of Promtails behind a load balancer according to your throughput and redundancy needs.
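For reference, a sketch of the configuration-file equivalent of that flag, assuming Promtail's `clients.external_labels` setting and the `-config.expand-env=true` flag for `${HOSTNAME}` expansion:

```yaml
# Give each Promtail instance a unique external label
# (equivalent to --client.external-labels=promtail=${HOSTNAME})
clients:
  - url: http://loki:3100/loki/api/v1/push   # illustrative Loki endpoint
    external_labels:
      promtail: ${HOSTNAME}   # expanded when Promtail runs with -config.expand-env=true
```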

@@ -149,4 +149,4 @@ Instead we can pipeline Cloudwatch logs to a set of Promtails, which can mitigat
1) Using Promtail's push API along with the `use_incoming_timestamp: false` config (see the sketch after this list), we let Promtail determine the timestamp based on when it ingests the logs, not the timestamp assigned by Cloudwatch. Obviously, this means that we lose the origin timestamp because Promtail now assigns it, but this is a relatively small difference in a real-time ingestion system like this.
2) In conjunction with (1), Promtail can coalesce logs across Cloudwatch log streams because it's no longer susceptible to out-of-order errors when combining multiple sources (lambda invocations).
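A minimal sketch of such a Promtail scrape config, assuming the `loki_push_api` target and its `use_incoming_timestamp` setting; the port and labels are illustrative:

```yaml
scrape_configs:
  - job_name: lambda_push
    loki_push_api:
      server:
        http_listen_port: 3500        # lambda-promtail pushes here (illustrative port)
      labels:
        source: lambda-promtail       # illustrative static label
      # Let Promtail assign the ingestion timestamp instead of the Cloudwatch one,
      # so entries from many lambda invocations can be coalesced without ordering errors.
      use_incoming_timestamp: false
```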

One important aspect to keep in mind when running with a set of Promtails behind a load balancer is that we're effectively moving the cardinality problems from the number of log streams -> number of Promtails. If you have not configured Loki to [accept out-of-order writes](../../configuration#accept-out-of-order-writes), you'll need to assign a Promtail-specific label on each Promtail so that you don't run into out-of-order errors when the Promtails send data for the same log groups to Loki. This can easily be done via a configuration like `--client.external-labels=promtail=${HOSTNAME}` passed to Promtail.
