DOCS: change format of unordered lists in technical docs #2724

Merged · 1 commit · Oct 5, 2020
34 changes: 17 additions & 17 deletions docs/sources/architecture/_index.md
@@ -186,9 +186,9 @@ Query frontends are **stateless**. However, due to how the internal queue works,

The query frontend queuing mechanism is used to:

* Ensure that large queries, that could cause an out-of-memory (OOM) error in the querier, will be retried on failure. This allows administrators to under-provision memory for queries, or optimistically run more small queries in parallel, which helps to reduce the TCO.
* Prevent multiple large requests from being convoyed on a single querier by distributing them across all queriers using a first-in/first-out queue (FIFO).
* Prevent a single tenant from denial-of-service-ing (DOSing) other tenants by fairly scheduling queries between tenants (see the sketch after this list).
- Ensure that large queries, that could cause an out-of-memory (OOM) error in the querier, will be retried on failure. This allows administrators to under-provision memory for queries, or optimistically run more small queries in parallel, which helps to reduce the TCO.
- Prevent multiple large requests from being convoyed on a single querier by distributing them across all queriers using a first-in/first-out queue (FIFO).
- Prevent a single tenant from denial-of-service-ing (DOSing) other tenants by fairly scheduling queries between tenants (see the sketch after this list).
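
To make the FIFO and fair-scheduling points above concrete, here is a minimal, illustrative Go sketch of per-tenant FIFO queues drained round-robin. It is not Loki's actual queue implementation; all type and function names are hypothetical.

```go
// Conceptual sketch only — not Loki's query-frontend queue code.
package main

import "fmt"

type request struct {
	tenant string
	query  string
}

type fairQueue struct {
	tenants []string             // round-robin order of tenants
	queues  map[string][]request // FIFO queue per tenant
	next    int                  // index of the tenant to serve next
}

func newFairQueue() *fairQueue {
	return &fairQueue{queues: make(map[string][]request)}
}

// Enqueue appends the request to its tenant's FIFO queue.
func (q *fairQueue) Enqueue(r request) {
	if _, ok := q.queues[r.tenant]; !ok {
		q.tenants = append(q.tenants, r.tenant)
	}
	q.queues[r.tenant] = append(q.queues[r.tenant], r)
}

// Dequeue pops the oldest request of the next tenant in round-robin order,
// so one tenant with many pending queries cannot starve the others.
func (q *fairQueue) Dequeue() (request, bool) {
	for i := 0; i < len(q.tenants); i++ {
		tenant := q.tenants[(q.next+i)%len(q.tenants)]
		if reqs := q.queues[tenant]; len(reqs) > 0 {
			q.queues[tenant] = reqs[1:]
			q.next = (q.next + i + 1) % len(q.tenants)
			return reqs[0], true
		}
	}
	return request{}, false
}

func main() {
	q := newFairQueue()
	q.Enqueue(request{tenant: "tenant-a", query: "q1"})
	q.Enqueue(request{tenant: "tenant-a", query: "q2"})
	q.Enqueue(request{tenant: "tenant-b", query: "q3"})
	for r, ok := q.Dequeue(); ok; r, ok = q.Dequeue() {
		fmt.Println(r.tenant, r.query) // tenant-a q1, tenant-b q3, tenant-a q2
	}
}
```

Draining one request per tenant in turn is what keeps a single tenant's burst of large queries from monopolising the queriers.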

#### Splitting

@@ -277,16 +277,16 @@ The **chunk store** is Loki's long-term data store, designed to support
interactive querying and sustained writing without the need for background
maintenance tasks. It consists of:

* An index for the chunks. This index can be backed by:
* [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
* [Google Bigtable](https://cloud.google.com/bigtable)
* [Apache Cassandra](https://cassandra.apache.org)
* A key-value (KV) store for the chunk data itself, which can be:
* [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
* [Google Bigtable](https://cloud.google.com/bigtable)
* [Apache Cassandra](https://cassandra.apache.org)
* [Amazon S3](https://aws.amazon.com/s3)
* [Google Cloud Storage](https://cloud.google.com/storage/)
- An index for the chunks. This index can be backed by:
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- [Google Bigtable](https://cloud.google.com/bigtable)
- [Apache Cassandra](https://cassandra.apache.org)
- A key-value (KV) store for the chunk data itself, which can be:
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- [Google Bigtable](https://cloud.google.com/bigtable)
- [Apache Cassandra](https://cassandra.apache.org)
- [Amazon S3](https://aws.amazon.com/s3)
- [Google Cloud Storage](https://cloud.google.com/storage/)

> Unlike the other core components of Loki, the chunk store is not a separate
> service, job, or process, but rather a library embedded in the two services
@@ -297,16 +297,16 @@ The chunk store relies on a unified interface to the
Cassandra) that can be used to back the chunk store index. This interface
assumes that the index is a collection of entries keyed by:

* A **hash key**. This is required for *all* reads and writes.
* A **range key**. This is required for writes and can be omitted for reads,
- A **hash key**. This is required for *all* reads and writes.
- A **range key**. This is required for writes and can be omitted for reads,
which can be queried by prefix or range.

The interface works somewhat differently across the supported databases:

* DynamoDB supports range and hash keys natively. Index entries are thus
- DynamoDB supports range and hash keys natively. Index entries are thus
modelled directly as DynamoDB entries, with the hash key as the distribution
key and the range as the DynamoDB range key.
* For Bigtable and Cassandra, index entries are modelled as individual column
- For Bigtable and Cassandra, index entries are modelled as individual column
values. The hash key becomes the row key and the range key becomes the column
key.
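
A hedged Go sketch of what such a hash-key/range-key index interface might look like; the type and method names below are hypothetical and do not come from the Loki code base.

```go
// Illustrative sketch of the index abstraction described above.
package index

import "context"

// Entry is keyed by a hash key (required for all reads and writes) and a
// range key (required for writes, optional prefix filter for reads).
type Entry struct {
	HashKey  string
	RangeKey []byte
	Value    []byte
}

// Store abstracts the supported backends (DynamoDB, Bigtable, Cassandra)
// behind one interface. In DynamoDB the hash key maps to the distribution
// key and the range key to the native range key; in Bigtable and Cassandra
// the hash key becomes the row key and the range key becomes the column key.
type Store interface {
	// Write stores entries; both keys must be set.
	Write(ctx context.Context, entries []Entry) error

	// Query returns the entries under hashKey whose range key starts with
	// rangePrefix; an empty prefix returns every entry for the hash key.
	Query(ctx context.Context, hashKey string, rangePrefix []byte) ([]Entry, error)
}
```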

6 changes: 3 additions & 3 deletions docs/sources/clients/aws/_index.md
@@ -4,6 +4,6 @@ title: AWS

Sending logs from AWS services to Loki is a little different depending on what AWS service you are using:

* [EC2](ec2/)
* [ECS](ecs/)
* [EKS](eks/)
- [EC2](ec2/)
- [ECS](ecs/)
- [EKS](eks/)
4 changes: 2 additions & 2 deletions docs/sources/clients/fluentbit/_index.md
@@ -146,8 +146,8 @@ If you don't want the `kubernetes` and `HOSTNAME` fields to appear in the log li

Buffering refers to the ability to store records somewhere and, while they are being processed and delivered, still be able to store more. The Loki output plugin can, in certain situations, be blocked by the Loki client because of its design:

* BatchSize is over the limit: the output plugin pauses receiving new records until the pending batch is successfully sent to the server.
* The Loki server is unreachable (retrying 429s, 500s, and connection-level errors): the output plugin blocks new records until the Loki server is available again and the pending batch is successfully sent, or until the maximum number of attempts within the configured back-off mechanism has been reached.
- BatchSize is over the limit: the output plugin pauses receiving new records until the pending batch is successfully sent to the server.
- The Loki server is unreachable (retrying 429s, 500s, and connection-level errors): the output plugin blocks new records until the Loki server is available again and the pending batch is successfully sent, or until the maximum number of attempts within the configured back-off mechanism has been reached.

The blocking state is not acceptable with some of the input plugins, because it can have undesirable side effects on the component that generates the logs. Fluent Bit implements a buffering mechanism based on parallel processing, so it cannot send logs in order, which is a Loki requirement (Loki logs must be in increasing time order per stream).
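
A conceptual Go sketch of the blocking behaviour described above, assuming a hypothetical `sendBatch` function in place of the real HTTP push; this is not the actual Fluent Bit output plugin code.

```go
// Sketch of a blocking batch sender under the assumptions noted above.
package main

import (
	"fmt"
	"time"
)

const (
	batchSize  = 3 // flush once this many records are pending
	maxRetries = 5 // give up after this many failed sends
)

// sendBatch stands in for the HTTP push to Loki; in reality it may return an
// error on 429s, 500s, or connection failures.
func sendBatch(batch []string) error {
	fmt.Println("sent:", batch)
	return nil
}

type output struct{ pending []string }

// Add blocks the caller whenever the pending batch is full: no new records
// are accepted until the batch is successfully sent or retries are exhausted.
func (o *output) Add(record string) error {
	o.pending = append(o.pending, record)
	if len(o.pending) < batchSize {
		return nil
	}
	backoff := 100 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := sendBatch(o.pending); err == nil {
			o.pending = o.pending[:0]
			return nil
		} else if attempt >= maxRetries {
			return fmt.Errorf("dropping batch after %d attempts: %w", attempt, err)
		}
		time.Sleep(backoff) // the caller stays blocked during back-off
		backoff *= 2
	}
}

func main() {
	var o output
	for i := 1; i <= 6; i++ {
		if err := o.Add(fmt.Sprintf("record %d", i)); err != nil {
			fmt.Println(err)
		}
	}
}
```

Because `Add` returns only after the batch is flushed or retries are exhausted, whatever feeds it stays blocked for the whole back-off period, which is exactly the side effect the paragraph above warns about.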

6 changes: 3 additions & 3 deletions docs/sources/clients/promtail/_index.md
@@ -41,9 +41,9 @@ Promtail can also be configured to receive logs from another Promtail or any Lok

There are a few instances where this might be helpful:

* complex network infrastructures where many machines having egress is not desirable.
* using the Docker Logging Driver and wanting to provide a complex pipeline or to extract metrics from logs.
* serverless setups where many ephemeral log sources want to send to Loki; sending to a Promtail instance with `use_incoming_timestamp` == false can avoid out-of-order errors and avoid having to use high-cardinality labels (see the sketch after this list).
- complex network infrastructures where many machines having egress is not desirable.
- using the Docker Logging Driver and wanting to provide a complex pipeline or to extract metrics from logs.
- serverless setups where many ephemeral log sources want to send to Loki; sending to a Promtail instance with `use_incoming_timestamp` == false can avoid out-of-order errors and avoid having to use high-cardinality labels (see the sketch after this list).
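
For the serverless case in the last item, a hedged Go sketch of pushing a single log line to such a Promtail instance over the Loki push API; the address, port, job label, and log line are assumptions for illustration only.

```go
// Sketch: an ephemeral log source pushing one line to a Promtail instance
// that exposes the Loki push API (hypothetical localhost:3500 listener).
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// One stream with one entry: timestamps are Unix nanoseconds as strings.
	payload := fmt.Sprintf(
		`{"streams":[{"stream":{"job":"serverless-fn"},"values":[["%d","hello from an ephemeral source"]]}]}`,
		time.Now().UnixNano(),
	)

	resp, err := http.Post(
		"http://localhost:3500/loki/api/v1/push", // hypothetical Promtail push address
		"application/json",
		bytes.NewBufferString(payload),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // Promtail re-timestamps and forwards to Loki
}
```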

## Receiving logs From Syslog
