diff --git a/docs/sources/community/contributing.md b/docs/sources/community/contributing.md index 5baaa66205fba..f5918dfe82f43 100644 --- a/docs/sources/community/contributing.md +++ b/docs/sources/community/contributing.md @@ -30,8 +30,10 @@ $ git commit -m "docs: fix spelling error" $ git push -u fork HEAD ``` -Note that if you downloaded Loki using `go get`, the message `package github.com/grafana/loki: no Go files in /go/src/github.com/grafana/loki` +{{% admonition type="note" %}} +If you downloaded Loki using `go get`, the message `package github.com/grafana/loki: no Go files in /go/src/github.com/grafana/loki` is normal and requires no action to resolve. +{{% /admonition %}} ### Building diff --git a/docs/sources/community/design-documents/2021-01-Ordering-Constraint-Removal.md b/docs/sources/community/design-documents/2021-01-Ordering-Constraint-Removal.md index df3e511d50d69..64e59e21d07de 100644 --- a/docs/sources/community/design-documents/2021-01-Ordering-Constraint-Removal.md +++ b/docs/sources/community/design-documents/2021-01-Ordering-Constraint-Removal.md @@ -129,7 +129,10 @@ The performance losses against the current approach include: Loki regularly combines multiple blocks into a chunk and "flushes" it to storage. In order to ensure that reads over flushed chunks remain as performant as possible, we will re-order a possibly-overlapping set of blocks into a set of blocks that maintain monotonically increasing order between them. From the perspective of the rest of Loki’s components (queriers/rulers fetching chunks from storage), nothing has changed. -**Note: In the case that data for a stream is ingested in order, this is effectively a no-op, making it well optimized for in-order writes (which is both the requirement and default in Loki currently). 
Thus, this should have little performance impact on ordered data while enabling Loki to ingest unordered data.** +{{% admonition type="note" %}} +**In the case that data for a stream is ingested in order, this is effectively a no-op, making it well optimized for in-order writes (which is both the requirement and default in Loki currently). Thus, this should have little performance impact on ordered data while enabling Loki to ingest unordered data.** +{{% /admonition %}} + #### Chunk Durations @@ -150,7 +153,9 @@ The second is simple to implement and an effective way to ensure Loki can ingest We also cut chunks according to the `sync_period`. The first timestamp ingested past this bound will trigger a cut. This process aids in increasing chunk determinism and therefore our deduplication ratio in object storage because chunks are [content addressed](https://en.wikipedia.org/wiki/Content-addressable_storage). With the removal of our ordering constraint, it's possible that in some cases the synchronization method will not be as effective, such as during concurrent writes to the same stream across this bound. -**Note: It's important to mention that this is possible today with the current ordering constraint, but we'll be increasing the likelihood by removing it** +{{% admonition type="note" %}} +**It's important to mention that this is possible today with the current ordering constraint, but we'll be increasing the likelihood by removing it.** +{{% /admonition %}} ``` Figure 5 diff --git a/docs/sources/community/maintaining/release-loki-build-image.md b/docs/sources/community/maintaining/release-loki-build-image.md index ceeb799f4c686..d6e1f15b1d817 100644 --- a/docs/sources/community/maintaining/release-loki-build-image.md +++ b/docs/sources/community/maintaining/release-loki-build-image.md @@ -16,19 +16,21 @@ if any changes were made in the folder `./loki-build-image/`. ## Step 1 -1. create a branch with the desired changes to the Dockerfile -2. 
update the version tag of the `loki-build-image` pipeline defined in `.drone/drone.jsonnet` (search for `pipeline('loki-build-image')`) to a new version number (try to follow semver) -3. run `DRONE_SERVER=https://drone.grafana.net/ DRONE_TOKEN=<token> make drone` and commit the changes to the same branch - 1. the `<token>` is your personal drone token, which can be found by navigating to https://drone.grafana.net/account. -4. create a PR -5. once approved and merged to `main`, the image with the new version is built and published - - **Note:** keep an eye on https://drone.grafana.net/grafana/loki for the build after merging ([example](https://drone.grafana.net/grafana/loki/17760/1/2)) +1. Create a branch with the desired changes to the Dockerfile. +2. Update the version tag of the `loki-build-image` pipeline defined in `.drone/drone.jsonnet` (search for `pipeline('loki-build-image')`) to a new version number (try to follow semver). +3. Run `DRONE_SERVER=https://drone.grafana.net/ DRONE_TOKEN=<token> make drone` and commit the changes to the same branch. + 1. The `<token>` is your personal drone token, which can be found by navigating to https://drone.grafana.net/account. +4. Create a PR. +5. Once approved and merged to `main`, the image with the new version is built and published. + {{% admonition type="note" %}} + Keep an eye on https://drone.grafana.net/grafana/loki for the build after merging ([example](https://drone.grafana.net/grafana/loki/17760/1/2)). + {{% /admonition %}} ## Step 2 -1. create a branch -2. update the `BUILD_IMAGE_VERSION` variable in the `Makefile` -3. run `loki-build-image/version-updater.sh <new-version>` to update all the references -4. run `DRONE_SERVER=https://drone.grafana.net/ DRONE_TOKEN=<token> make drone` to update the Drone config to use the new build image -5. create a new PR +1. Create a branch. +2. Update the `BUILD_IMAGE_VERSION` variable in the `Makefile`. +3. Run `loki-build-image/version-updater.sh <new-version>` to update all the references. +4. 
Run `DRONE_SERVER=https://drone.grafana.net/ DRONE_TOKEN=<token> make drone` to update the Drone config to use the new build image. +5. Create a new PR. diff --git a/docs/sources/get-started/deployment-modes.md b/docs/sources/get-started/deployment-modes.md index d0b590086cfee..5b4766f65253e 100644 --- a/docs/sources/get-started/deployment-modes.md +++ b/docs/sources/get-started/deployment-modes.md @@ -75,11 +75,14 @@ For release 2.9 the components are: - Ruler - Table Manager (deprecated) -TIP: You can see the complete list of targets for your version of Loki by running Loki with the flag `-list-targets`, for example: +{{% admonition type="tip" %}} +You can see the complete list of targets for your version of Loki by running Loki with the flag `-list-targets`, for example: ```bash docker run docker.io/grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -list-targets ``` +{{% /admonition %}} + ![Microservices mode diagram](../microservices-mode.png "Microservices mode") Running components as individual microservices provides more granularity, letting you scale each component individually to better match your specific use case. diff --git a/docs/sources/get-started/labels/_index.md b/docs/sources/get-started/labels/_index.md index 95deccc52bf11..e33f36d91f419 100644 --- a/docs/sources/get-started/labels/_index.md +++ b/docs/sources/get-started/labels/_index.md @@ -28,13 +28,10 @@ See [structured metadata]({{< relref "./structured-metadata" >}}) for more infor Loki places the same restrictions on label naming as [Prometheus](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels): -> It may contain ASCII letters and digits, as well as underscores and colons. It must match the regex `[a-zA-Z_:][a-zA-Z0-9_:]*`. -> -> Note: The colons are reserved for user defined recording rules. They should not be used by exporters or direct instrumentation. +- It may contain ASCII letters and digits, as well as underscores and colons. 
It must match the regex `[a-zA-Z_:][a-zA-Z0-9_:]*`. +- The colons are reserved for user-defined recording rules. They should not be used by exporters or direct instrumentation. +- Unsupported characters in the label should be converted to an underscore. For example, the label `app.kubernetes.io/name` should be written as `app_kubernetes_io_name`. -{{% admonition type="note" %}} -Unsupported characters in the label should be converted to an underscore. For example, the label `app.kubernetes.io/name` should be written as `app_kubernetes_io_name` -{{% /admonition %}} ## Loki labels demo diff --git a/docs/sources/operations/authentication.md b/docs/sources/operations/authentication.md index 4235959b1f2c6..96081dbab52e7 100644 --- a/docs/sources/operations/authentication.md +++ b/docs/sources/operations/authentication.md @@ -18,10 +18,11 @@ A list of open-source reverse proxies you can use: - [OAuth2 proxy](https://github.com/oauth2-proxy/oauth2-proxy) - [HAProxy](https://www.haproxy.org/) -Note that when using Loki in multi-tenant mode, Loki requires the HTTP header +{{% admonition type="note" %}} +When using Loki in multi-tenant mode, Loki requires the HTTP header `X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility of populating this value should be handled by the authenticating reverse proxy. -For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation. +For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation. +{{% /admonition %}} For information on authenticating Promtail, see the documentation for [how to configure Promtail]({{< relref "../send-data/promtail/configuration" >}}). 
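To make the multi-tenant header requirement above concrete, here is a minimal sketch of a Promtail client section that populates `X-Scope-OrgID` through the `tenant_id` option (the URL and the tenant name `tenant-a` are placeholders):

```yaml
# Illustrative Promtail client config: `tenant_id` sets the
# X-Scope-OrgID header on every push to Loki.
clients:
  - url: http://loki:3100/loki/api/v1/push
    tenant_id: tenant-a
```

In practice the authenticating reverse proxy would usually inject this header itself, as described above.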
diff --git a/docs/sources/operations/recording-rules.md b/docs/sources/operations/recording-rules.md index dfef1bf0e4702..afac69b75e271 100644 --- a/docs/sources/operations/recording-rules.md +++ b/docs/sources/operations/recording-rules.md @@ -30,9 +30,12 @@ is that Prometheus will, for example, reject a remote-write request with 100 sam When the `ruler` starts up, it will load the WALs for the tenants who have recording rules. These WAL files are stored on disk and are loaded into memory. -Note: WALs are loaded one at a time upon start-up. This is a current limitation of the Loki ruler. +{{% admonition type="note" %}} +WALs are loaded one at a time upon start-up. This is a current limitation of the Loki ruler. For this reason, it is advisable that the number of rule groups serviced by a ruler be kept to a reasonable size, since _no rule evaluation occurs while WAL replay is in progress (this includes alerting rules)_. +{{% /admonition %}} + ### Truncation @@ -52,8 +55,11 @@ excessively large due to truncation. ## Scaling See Mimir's guide for [configuring Grafana Mimir hash rings](/docs/mimir/latest/configure/configure-hash-rings/) for scaling the ruler using a ring. -Note: the `ruler` shards by rule _group_, not by individual rules. This is an artifact of the fact that Prometheus + +{{% admonition type="note" %}} +The `ruler` shards by rule _group_, not by individual rules. This is an artifact of the fact that Prometheus recording rules need to run in order since one recording rule can reuse another - but this is not possible in Loki. 
+{{% /admonition %}} ## Deployment diff --git a/docs/sources/operations/storage/boltdb-shipper.md b/docs/sources/operations/storage/boltdb-shipper.md index 7f29e1c23a865..df32b95f3eedf 100644 --- a/docs/sources/operations/storage/boltdb-shipper.md +++ b/docs/sources/operations/storage/boltdb-shipper.md @@ -7,7 +7,7 @@ weight: 200 # Single Store BoltDB (boltdb-shipper) {{% admonition type="note" %}} -Note that single store BoltDB Shipper is a legacy storage option and is not recommended for new deployments. The [TSDB]({{< relref "./tsdb" >}}) index is the recommended index. +Single store BoltDB Shipper is a legacy storage option and is not recommended for new deployments. The [TSDB]({{< relref "./tsdb" >}}) index is the recommended index. {{% /admonition %}} BoltDB Shipper lets you run Grafana Loki without any dependency on NoSQL stores for storing the index. @@ -75,7 +75,10 @@ they both having shipped files for day `18371` and `18372` with prefix `loki_ind └── ingester-1-1587254400.gz ... ``` -**Note:** We also add a timestamp to names of the files to randomize the names to avoid overwriting files when running Ingesters with same name and not have a persistent storage. Timestamps not shown here for simplification. + +{{% admonition type="note" %}} +Loki also adds a timestamp to the file names to randomize them, which avoids overwriting files when Ingesters run with the same name and do not have persistent storage. Timestamps are not shown here for simplicity. +{{% /admonition %}} Let us talk more in depth about how both Ingesters and Queriers work when running them with BoltDB Shipper. @@ -86,7 +89,9 @@ and the BoltDB Shipper looks for new and updated files in that directory at 1 mi When running Loki in microservices mode, there could be multiple ingesters serving write requests. Each ingester generates BoltDB files locally. 
-**Note:** To avoid any loss of index when an ingester crashes, we recommend running ingesters as a statefulset (when using Kubernetes) with a persistent storage for storing index files. +{{% admonition type="note" %}} +To avoid any loss of index when an ingester crashes, we recommend running ingesters as a StatefulSet (when using Kubernetes) with persistent storage for storing index files. +{{% /admonition %}} When chunks are flushed, they are available for reads in the object store instantly. The index is not available instantly, since we upload every 15 minutes with the BoltDB shipper. Ingesters expose a new RPC for letting queriers query the ingester's local index for chunks which were recently flushed, but its index might not be available yet with queriers. @@ -132,7 +137,9 @@ While using `boltdb-shipper` avoid configuring WriteDedupe cache since it is use Compactor is a BoltDB Shipper specific service that reduces the index size by deduping the index and merging all the files to a single file per table. We recommend running a Compactor since a single Ingester creates 96 files per day which include a lot of duplicate index entries and querying multiple files per table adds to the overall query latency. -**Note:** There should be only 1 compactor instance running at a time that otherwise could create problems and may lead to data loss. +{{% admonition type="note" %}} +Only one compactor instance should run at a time; running more than one could create problems and may lead to data loss. 
+{{% /admonition %}} Example compactor configuration with GCS: diff --git a/docs/sources/operations/storage/table-manager/_index.md b/docs/sources/operations/storage/table-manager/_index.md index 81b835a11382a..0e6ba42cc71ff 100644 --- a/docs/sources/operations/storage/table-manager/_index.md +++ b/docs/sources/operations/storage/table-manager/_index.md @@ -145,9 +145,11 @@ number_of_tables_to_keep = floor(retention_period / table_period) + 1 ![retention](./table-manager-retention.png) +{{% admonition type="note" %}} It's important to note that - due to the internal implementation - the table `period` and `retention_period` **must** be multiples of `24h` in order to get the expected behavior. +{{% /admonition %}} For detailed information on configuring the retention, refer to the [Loki Storage Retention]({{< relref "../retention" >}}) diff --git a/docs/sources/operations/storage/wal.md b/docs/sources/operations/storage/wal.md index 45f8c396cccac..6baf78adc5f4e 100644 --- a/docs/sources/operations/storage/wal.md +++ b/docs/sources/operations/storage/wal.md @@ -21,13 +21,13 @@ The Write Ahead Log in Loki takes a few particular tradeoffs compared to other W In the event the WAL is corrupted/partially deleted, Loki will not be able to recover all of its data. In this case, Loki will attempt to recover any data it can, but will not prevent Loki from starting. -Note: the Prometheus metric `loki_ingester_wal_corruptions_total` can be used to track and alert when this happens. +You can use the Prometheus metric `loki_ingester_wal_corruptions_total` to track and alert when this happens. 1) No space left on disk In the event the underlying WAL disk is full, Loki will not fail incoming writes, but neither will it log them to the WAL. In this case, the persistence guarantees across process restarts will not hold. -Note: the Prometheus metric `loki_ingester_wal_disk_full_failures_total` can be used to track and alert when this happens. 
+You can use the Prometheus metric `loki_ingester_wal_disk_full_failures_total` to track and alert when this happens. ### Backpressure diff --git a/docs/sources/query/logcli.md b/docs/sources/query/logcli.md index 0d870c44150d0..297730a589ee4 100644 --- a/docs/sources/query/logcli.md +++ b/docs/sources/query/logcli.md @@ -70,9 +70,11 @@ without needing a username and password: export LOKI_ADDR=http://localhost:3100 ``` -> Note: If you are running Loki behind a proxy server and you have -> authentication configured, you will also have to pass in LOKI_USERNAME -> and LOKI_PASSWORD, LOKI_BEARER_TOKEN or LOKI_BEARER_TOKEN_FILE accordingly. +{{% admonition type="note" %}} +If you are running Loki behind a proxy server and you have +authentication configured, you will also have to pass in LOKI_USERNAME +and LOKI_PASSWORD, LOKI_BEARER_TOKEN or LOKI_BEARER_TOKEN_FILE accordingly. +{{% /admonition %}} ```bash $ logcli labels job @@ -512,7 +514,9 @@ You can consume log lines from your `stdin` instead of Loki servers. Say you have log files on your local machine and just want to run some LogQL queries against them; the `--stdin` flag can help. -**NOTE: Currently it doesn't support any type of metric queries** +{{% admonition type="note" %}} +Currently, it doesn't support any type of metric query. +{{% /admonition %}} You may want to use the `--stdin` flag for several reasons 1. Quick way to check and validate LogQL expressions. diff --git a/docs/sources/send-data/lambda-promtail/_index.md b/docs/sources/send-data/lambda-promtail/_index.md index 0d39a75f143eb..783b5d231bb94 100644 --- a/docs/sources/send-data/lambda-promtail/_index.md +++ b/docs/sources/send-data/lambda-promtail/_index.md @@ -99,7 +99,9 @@ Ephemeral jobs can quite easily run afoul of cardinality best practices. 
For those using Cloudwatch and wishing to test out Loki in a low-risk way, this workflow allows piping Cloudwatch logs to Loki regardless of the event source (EC2, Kubernetes, Lambda, ECS, etc) without setting up a set of Promtail daemons across their infrastructure. However, running Promtail as a daemon on your infrastructure is the best-practice deployment strategy in the long term for flexibility, reliability, performance, and cost. -Note: Propagating logs from Cloudwatch to Loki means you'll still need to _pay_ for Cloudwatch. +{{% admonition type="note" %}} +Propagating logs from Cloudwatch to Loki means you'll still need to _pay_ for Cloudwatch. +{{% /admonition %}} ### VPC Flow logs @@ -163,7 +165,9 @@ Incoming logs can have seven special labels assigned to them which can be used i ### Promtail labels -Note: This section is relevant if running Promtail between lambda-promtail and the end Loki deployment and was used to circumvent `out of order` problems prior to the v2.4 Loki release which removed the ordering constraint. +{{% admonition type="note" %}} +This section is relevant if you are running Promtail between lambda-promtail and the end Loki deployment. It was used to circumvent `out of order` problems prior to the v2.4 Loki release, which removed the ordering constraint. +{{% /admonition %}} As stated earlier, this workflow moves the worst case stream cardinality from `number_of_log_streams` -> `number_of_log_groups` * `number_of_promtails`. For this reason, each Promtail must have a unique label attached to logs it processes (ideally via something like `--client.external-labels=promtail=${HOSTNAME}`) and it's advised to run a small number of Promtails behind a load balancer according to your throughput and redundancy needs. 
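The worst-case cardinality arithmetic above can be sanity-checked with a quick sketch (the counts are hypothetical):

```bash
# After the lambda-promtail workflow, worst-case stream cardinality is
# one stream per (log group, Promtail) pair rather than one per log stream.
log_groups=50
promtails=3
echo $(( log_groups * promtails ))  # 150
```

Keeping the number of Promtails small keeps this product, and therefore the stream count, manageable.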
@@ -191,7 +195,9 @@ The provided Terraform and CloudFormation files are meant to cover the default u ## Example Promtail Config -Note: this should be run in conjunction with a Promtail-specific label attached, ideally via a flag argument like `--client.external-labels=promtail=${HOSTNAME}`. It will receive writes via the push-api on ports `3500` (http) and `3600` (grpc). +{{% admonition type="note" %}} +This should be run in conjunction with a Promtail-specific label attached, ideally via a flag argument like `--client.external-labels=promtail=${HOSTNAME}`. It will receive writes via the push-api on ports `3500` (http) and `3600` (grpc). +{{% /admonition %}} ```yaml server: diff --git a/docs/sources/send-data/promtail/stages/drop.md b/docs/sources/send-data/promtail/stages/drop.md index 2acc2443ba856..77b66020bb23f 100644 --- a/docs/sources/send-data/promtail/stages/drop.md +++ b/docs/sources/send-data/promtail/stages/drop.md @@ -126,7 +126,9 @@ Would drop this log line: #### Drop old log lines -**NOTE** For `older_than` to work, you must be using the [timestamp]({{< relref "./timestamp" >}}) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage. +{{% admonition type="note" %}} +For `older_than` to work, you must be using the [timestamp]({{< relref "./timestamp" >}}) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage. 
+{{% /admonition %}} Given the pipeline: diff --git a/docs/sources/send-data/promtail/stages/json.md b/docs/sources/send-data/promtail/stages/json.md index 2f3c1bd44c733..6babe1f60700e 100644 --- a/docs/sources/send-data/promtail/stages/json.md +++ b/docs/sources/send-data/promtail/stages/json.md @@ -134,5 +134,7 @@ The following key-value pairs would be created in the set of extracted data: - `stream`: `stderr` - `timestamp`: `2019-04-30T02:12:41.8443515` -Note that referring to `grpc.stream` without the combination of double quotes +{{% admonition type="note" %}} +Referring to `grpc.stream` without the combination of double quotes wrapped in single quotes will not work properly. +{{% /admonition %}} diff --git a/docs/sources/setup/install/local.md b/docs/sources/setup/install/local.md index 91d972b6368e7..dbdeb8ca3a164 100644 --- a/docs/sources/setup/install/local.md +++ b/docs/sources/setup/install/local.md @@ -34,10 +34,10 @@ The configuration specifies running Loki as a single binary. 1. Navigate to the [release page](https://github.com/grafana/loki/releases/). 2. Scroll down to the Assets section under the version that you want to install. 3. Download the Loki and Promtail .zip files that correspond to your system. - **Note:** Do not download LogCLI or Loki Canary at this time. `LogCLI` allows you to run Loki queries in a command line interface. [Loki Canary]({{< relref "../../operations/loki-canary" >}}) is a tool to audit Loki performance. + Do not download LogCLI or Loki Canary at this time. `LogCLI` allows you to run Loki queries in a command line interface. [Loki Canary]({{< relref "../../operations/loki-canary" >}}) is a tool to audit Loki performance. 4. Unzip the package contents into the same directory. This is where the two programs will run. 5. In the command line, change directory (`cd` on most systems) to the directory with Loki and Promtail. Copy and paste the commands below into your command line to download generic configuration files. 
- **Note:** Use the corresponding Git refs that match your downloaded Loki version to get the correct configuration file. For example, if you are using Loki version 2.9.2, you need to use the `https://raw.githubusercontent.com/grafana/loki/v2.9.2/cmd/loki/loki-local-config.yaml` URL to download the configuration file that corresponds to the Loki version you aim to run. + Use the corresponding Git refs that match your downloaded Loki version to get the correct configuration file. For example, if you are using Loki version 2.9.2, use `https://raw.githubusercontent.com/grafana/loki/v2.9.2/cmd/loki/loki-local-config.yaml` to download the configuration file for that version. ``` wget https://raw.githubusercontent.com/grafana/loki/main/cmd/loki/loki-local-config.yaml diff --git a/docs/sources/setup/upgrade/_index.md b/docs/sources/setup/upgrade/_index.md index e9483e5219409..663201820e1e6 100644 --- a/docs/sources/setup/upgrade/_index.md +++ b/docs/sources/setup/upgrade/_index.md @@ -372,8 +372,10 @@ limits_config: retention_period: 744h ``` -**Note:** In previous versions, the zero value of `0` or `0s` will result in **immediate deletion of all logs**, +{{% admonition type="note" %}} +In previous versions, the zero value of `0` or `0s` would result in **immediate deletion of all logs**; only in 2.8 and later releases does the zero value disable retention. +{{% /admonition %}} #### metrics.go log line `subqueries` replaced with `splits` and `shards` @@ -387,7 +389,9 @@ In 2.8 we no longer include `subqueries` in metrics.go, it does still exist in t Instead, now you can use `splits` to see how many split by time intervals were created and `shards` to see the total number of shards created for a query. -Note: currently not every query can be sharded and a shards value of zero is a good indicator the query was not able to be sharded. 
+{{% admonition type="note" %}} +Currently, not every query can be sharded; a `shards` value of zero is a good indicator that the query could not be sharded. +{{% /admonition %}} ### Promtail @@ -418,7 +422,9 @@ ruler: #### query-frontend Kubernetes headless service changed to load balanced service -*Note:* This is relevant only if you are using [jsonnet for deploying Loki in Kubernetes](/docs/loki/latest/installation/tanka/) +{{% admonition type="note" %}} +This is relevant only if you are using [jsonnet for deploying Loki in Kubernetes](/docs/loki/latest/installation/tanka/). +{{% /admonition %}} The `query-frontend` Kubernetes service was previously headless and was used for two purposes: * Distributing the Loki query requests amongst all the available Query Frontend pods. @@ -951,7 +957,9 @@ In Loki 2.2 we changed the internal version of our chunk format from v2 to v3, t This makes it important to first upgrade to 2.0, 2.0.1, or 2.1 **before** upgrading to 2.2 so that if you need to rollback for any reason you can do so easily. -**Note:** 2.0 and 2.0.1 are identical in every aspect except 2.0.1 contains the code necessary to read the v3 chunk format. Therefor if you are on 2.0 and ugrade to 2.2, if you want to rollback, you must rollback to 2.0.1. +{{% admonition type="note" %}} +2.0 and 2.0.1 are identical in every aspect except 2.0.1 contains the code necessary to read the v3 chunk format. Therefore, if you are on 2.0 and upgrade to 2.2 and then want to roll back, you must roll back to 2.0.1. +{{% /admonition %}} ### Loki Config @@ -1095,9 +1103,14 @@ This likely only affects a small portion of tanka users because the default sche } ``` ->**NOTE** If you had set `index_period_hours` to a value other than 168h (the previous default) you must update this in the above config `period:` to match what you chose. 
+{{% admonition type="note" %}} +If you had set `index_period_hours` to a value other than 168h (the previous default), you must update the `period:` in the above config to match what you chose. +{{% /admonition %}} + ->**NOTE** We have changed the default index store to `boltdb-shipper` it's important to add `using_boltdb_shipper: false,` until you are ready to change (if you want to change) +{{% admonition type="note" %}} +We have changed the default index store to `boltdb-shipper`. It's important to add `using_boltdb_shipper: false,` until you are ready to change (if you want to change). +{{% /admonition %}} Changing the jsonnet config to use the `boltdb-shipper` type is the same as [below](#upgrading-schema-to-use-boltdb-shipper-andor-v11-schema) where you need to add a new schema section. @@ -1139,9 +1152,9 @@ _THIS BEING SAID_ we are not expecting problems, our testing so far has not unco Report any problems via GitHub issues or reach us on the #loki slack channel. -**Note if are using boltdb-shipper and were running with high availability and separate filesystems** - -This was a poorly documented and even more experimental mode we toyed with using boltdb-shipper. For now we removed the documentation and also any kind of support for this mode. +{{% admonition type="note" %}} +If you are using boltdb-shipper and were running with high availability and separate filesystems: this was a poorly documented and even more experimental mode we toyed with using boltdb-shipper. For now, we have removed the documentation and any kind of support for this mode. +{{% /admonition %}} To use boltdb-shipper in 2.0 you need a shared storage (S3, GCS, etc), the mode of running with separate filesystem stores in HA using a ring is not officially supported. @@ -1284,7 +1297,9 @@ schema_config: ``` If you are not on `schema: v11` this would be a good opportunity to make that change _in the new schema config_ also. 
-**NOTE** If the current time in your timezone is after midnight UTC already, set the date one additional day forward. +{{% admonition type="note" %}} +If the current time in your timezone is after midnight UTC already, set the date one additional day forward. +{{% /admonition %}} There was also a significant overhaul to how boltdb-shipper internals, this should not be visible to a user but as this feature is experimental and under development bug are possible! @@ -1343,7 +1358,9 @@ Defaulting to `gcs,bigtable` was confusing for anyone using ksonnet with other s ## 1.5.0 -Note: The required upgrade path outlined for version 1.4.0 below is still true for moving to 1.5.0 from any release older than 1.4.0 (e.g. 1.3.0->1.5.0 needs to also look at the 1.4.0 upgrade requirements). +{{% admonition type="note" %}} +The required upgrade path outlined for version 1.4.0 below is still true for moving to 1.5.0 from any release older than 1.4.0 (e.g. 1.3.0 -> 1.5.0 needs to also look at the 1.4.0 upgrade requirements). +{{% /admonition %}} ### Breaking config changes! @@ -1397,7 +1414,9 @@ Not every environment will allow this capability however, it's possible to restr #### Filesystem -**Note the location Loki is looking for files with the provided config in the docker image has changed** +{{% admonition type="note" %}} +The location Loki is looking for files with the provided config in the docker image has changed. +{{% /admonition %}} In 1.4.0 and earlier the included config file in the docker container was using directories: @@ -1498,7 +1517,7 @@ The other config changes should not be relevant to Loki. The newly vendored version of Cortex removes code related to de-normalized tokens in the ring. 
What you need to know is this: -*Note:* A "shared ring" as mentioned below refers to using *consul* or *etcd* for values in the following config: +A "shared ring" as mentioned below refers to using *consul* or *etcd* for values in the following config: ```yaml kvstore: @@ -1517,14 +1536,14 @@ There are two options for upgrade if you are not on version 1.3.0 and are using OR -**Note:** If you are running a single binary you only need to add this flag to your single binary command. +- If you are running a single binary, you only need to add this flag to your single binary command. 1. Add the following configuration to your ingesters command: `-ingester.normalise-tokens=true` 1. Restart your ingesters with this config 1. Proceed with upgrading to v1.4.0 1. Remove the config option (only do this after everything is running v1.4.0) -**Note:** It's also possible to enable this flag via config file, see the [`lifecycler_config`](https://github.com/grafana/loki/tree/v1.3.0/docs/configuration#lifecycler_config) configuration option. +It is also possible to enable this flag via config file; see the [`lifecycler_config`](https://github.com/grafana/loki/tree/v1.3.0/docs/configuration#lifecycler_config) configuration option. If using the Helm Loki chart: diff --git a/docs/sources/storage/_index.md b/docs/sources/storage/_index.md index 81a767e1add31..bbbebf756fc73 100644 --- a/docs/sources/storage/_index.md +++ b/docs/sources/storage/_index.md @@ -82,7 +82,9 @@ You may use any substitutable services, such as those that implement the S3 API Cassandra is a popular database and one of Loki's possible chunk stores and is production safe. -> **Note:** This storage type for chunks is deprecated and may be removed in future major versions of Loki. +{{< collapse title="Deprecation notice" >}} +This storage type for chunks is deprecated and may be removed in future major versions of Loki. 
+{{< /collapse >}} ## Index storage @@ -90,19 +92,25 @@ Cassandra is a popular database and one of Loki's possible chunk stores and is p Cassandra can also be utilized for the index store and aside from the [boltdb-shipper]({{< relref "../operations/storage/boltdb-shipper" >}}), it's the only non-cloud offering that can be used for the index that's horizontally scalable and has configurable replication. It's a good candidate when you already run Cassandra, are running on-prem, or do not wish to use a managed cloud offering. -> **Note:** This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< collapse title="Deprecation notice" >}} +This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< /collapse >}} ### BigTable (deprecated) Bigtable is a cloud database offered by Google. It is a good candidate for a managed index store if you're already using it (due to its heavy fixed costs) or wish to run in GCP. -> **Note:** This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< collapse title="Deprecation notice" >}} +This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< /collapse >}} ### DynamoDB (deprecated) DynamoDB is a cloud database offered by AWS. It is a good candidate for a managed index store, especially if you're already running in AWS. -> **Note:** This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< collapse title="Deprecation notice" >}} +This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< /collapse >}} #### Rate limiting @@ -112,7 +120,9 @@ DynamoDB is susceptible to rate limiting, particularly due to overconsuming what BoltDB is an embedded database on disk. 
It is not replicated and thus cannot be used for high availability or clustered Loki deployments, but is commonly paired with a `filesystem` chunk store for proof of concept deployments, trying out Loki, and development. The [boltdb-shipper]({{< relref "../operations/storage/boltdb-shipper" >}}) aims to support clustered deployments using `boltdb` as an index. -> **Note:** This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< collapse title="Deprecation notice" >}} +This storage type for indexes is deprecated and may be removed in future major versions of Loki. +{{< /collapse >}} ## Schema Config @@ -428,7 +438,9 @@ storage_config: ### On premise deployment (Cassandra+Cassandra) -> **Note:** Cassandra as storage backend for chunks and indexes is deprecated. +{{< collapse title="Deprecation notice" >}} +Cassandra as storage backend for chunks and indexes is deprecated. +{{< /collapse >}} **Keeping this for posterity, but this is likely not a common config. Cassandra should work and could be faster in some situations but is likely much more expensive.**
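Since several of the index stores above are deprecated, a minimal sketch of a `schema_config` using the recommended TSDB index may be a useful contrast (the `from` date and `object_store` values are placeholders):

```yaml
# Illustrative schema_config for the TSDB index, recommended over the
# deprecated stores above; dates and the object store are placeholders.
schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
```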