diff --git a/docs/sources/alert/_index.md b/docs/sources/alert/_index.md index 7eeb0962d7acf..e7a42b1ef4706 100644 --- a/docs/sources/alert/_index.md +++ b/docs/sources/alert/_index.md @@ -168,15 +168,21 @@ Another great use case is alerting on high cardinality sources. These are things Creating these alerts in LogQL is attractive because these metrics can be extracted at _query time_, meaning we don't suffer the cardinality explosion in our metrics store. -> **Note** As an example, we can use LogQL v2 to help Loki to monitor _itself_, alerting us when specific tenants have queries that take longer than 10s to complete! To do so, we'd use the following query: `sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m]))` +{{% admonition type="note" %}} +As an example, we can use LogQL v2 to help Loki to monitor _itself_, alerting us when specific tenants have queries that take longer than 10s to complete! To do so, we'd use the following query: `sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m]))` +{{% /admonition %}} ## Interacting with the Ruler Because the rule files are identical to Prometheus rule files, we can interact with the Loki Ruler via [`cortextool`](https://github.com/grafana/cortex-tools#rules). The CLI is in early development, but it works with both Loki and Cortex. Pass the `--backend=loki` option when using it with Loki. -> **Note:** Not all commands in cortextool currently support Loki. +{{% admonition type="note" %}} +Not all commands in `cortextool` currently support Loki. +{{% /admonition %}} -> **Note:** cortextool was intended to run against multi-tenant Loki, commands need an `--id=` flag set to the Loki instance ID or set the environment variable `CORTEX_TENANT_ID`. 
If Loki is running in single tenant mode, the required ID is `fake` (yes we know this might seem alarming but it's totally fine, no it can't be changed) +{{% admonition type="note" %}} +`cortextool` was intended to run against multi-tenant Loki, so commands need an `--id=` flag set to the Loki instance ID, or set the environment variable `CORTEX_TENANT_ID`. If Loki is running in single tenant mode, the required ID is `fake` (yes, we know this might seem alarming, but it's totally fine; no, it can't be changed). +{{% /admonition %}} An example workflow is included below: diff --git a/docs/sources/api/_index.md b/docs/sources/api/_index.md index cb6a0bc9a2b62..944b30dbc2182 100644 --- a/docs/sources/api/_index.md +++ b/docs/sources/api/_index.md @@ -1188,8 +1188,9 @@ curl -u "Tenant1:$API_TOKEN" \ ### `GET /api/prom/tail` -> **DEPRECATED**: `/api/prom/tail` is deprecated. Use `/loki/api/v1/tail` -> instead. +{{% admonition type="warning" %}} +`/api/prom/tail` is deprecated. Use `/loki/api/v1/tail` instead. +{{% /admonition %}} `/api/prom/tail` is a WebSocket endpoint that will stream log messages based on a query. It accepts the following query parameters in the URL: @@ -1237,8 +1238,9 @@ will be sent over the WebSocket multiple times. ### `GET /api/prom/query` -> **WARNING**: `/api/prom/query` is DEPRECATED; use `/loki/api/v1/query_range` -> instead. +{{% admonition type="warning" %}} +`/api/prom/query` is DEPRECATED; use `/loki/api/v1/query_range` instead. +{{% /admonition %}} `/api/prom/query` supports doing general queries. 
The URL query parameters support the following values: @@ -1307,7 +1309,9 @@ $ curl -G -s "http://localhost:3100/api/prom/query" --data-urlencode 'query={foo ### `GET /api/prom/label/<name>/values` -> **WARNING**: `/api/prom/label/<name>/values` is DEPRECATED; use `/loki/api/v1/label/<name>/values` +{{% admonition type="warning" %}} +`/api/prom/label/<name>/values` is DEPRECATED; use `/loki/api/v1/label/<name>/values` +{{% /admonition %}} `/api/prom/label/<name>/values` retrieves the list of known values for a given label within a given time span. It accepts the following query parameters in @@ -1345,7 +1349,9 @@ $ curl -G -s "http://localhost:3100/api/prom/label/foo/values" | jq ### `GET /api/prom/label` -> **WARNING**: `/api/prom/label` is DEPRECATED; use `/loki/api/v1/labels` +{{% admonition type="warning" %}} +`/api/prom/label` is DEPRECATED; use `/loki/api/v1/labels` +{{% /admonition %}} `/api/prom/label` retrieves the list of known labels within a given time span. It accepts the following query parameters in the URL: @@ -1382,8 +1388,9 @@ $ curl -G -s "http://localhost:3100/api/prom/label" | jq ### `POST /api/prom/push` -> **WARNING**: `/api/prom/push` is DEPRECATED; use `/loki/api/v1/push` -> instead. +{{% admonition type="warning" %}} +`/api/prom/push` is DEPRECATED; use `/loki/api/v1/push` instead. +{{% /admonition %}} `/api/prom/push` is the endpoint used to send log entries to Loki. The default behavior is for the POST body to be a snappy-compressed protobuf message: @@ -1421,8 +1428,9 @@ $ curl -H "Content-Type: application/json" -XPOST -s "https://localhost:3100/api ### `POST /ingester/flush_shutdown` -> **WARNING**: `/ingester/flush_shutdown` is DEPRECATED; use `/ingester/shutdown?flush=true` -> instead. +{{% admonition type="warning" %}} +`/ingester/flush_shutdown` is DEPRECATED; use `/ingester/shutdown?flush=true` instead. +{{% /admonition %}} `/ingester/flush_shutdown` triggers a shutdown of the ingester and notably will _always_ flush any in-memory chunks it holds. 
This is helpful for scaling down WAL-enabled ingesters where we want to ensure old WAL directories are not orphaned, diff --git a/docs/sources/clients/docker-driver/configuration.md b/docs/sources/clients/docker-driver/configuration.md index 93a8f9630f058..f37b0736948c7 100644 --- a/docs/sources/clients/docker-driver/configuration.md +++ b/docs/sources/clients/docker-driver/configuration.md @@ -30,9 +30,11 @@ docker run --log-driver=loki \ grafana/grafana ``` -> **Note**: The Loki logging driver still uses the json-log driver in combination with sending logs to Loki, this is mainly useful to keep the `docker logs` command working. -> You can adjust file size and rotation using the respective log option `max-size` and `max-file`. Keep in mind that default values for these options are not taken from json-log configuration. -> You can deactivate this behavior by setting the log option `no-file` to true. +{{% admonition type="note" %}} +The Loki logging driver still uses the json-log driver in combination with sending logs to Loki; this is mainly useful to keep the `docker logs` command working. +You can adjust file size and rotation using the respective log options `max-size` and `max-file`. Keep in mind that default values for these options are not taken from json-log configuration. +You can deactivate this behavior by setting the log option `no-file` to `true`. +{{% /admonition %}} ## Change the default logging driver @@ -61,9 +63,11 @@ Options for the logging driver can also be configured with `log-opts` in the } ``` -> **Note**: log-opt configuration options in daemon.json must be provided as -> strings. +{{% admonition type="note" %}} +`log-opt` configuration options in `daemon.json` must be provided as
Boolean and numeric values (such as the value for `loki-batch-size` in +the example above) must therefore be enclosed in quotes (`"`). +{{% /admonition %}} After changing `daemon.json`, restart the Docker daemon for the changes to take effect. All **newly created** containers from that host will then send logs to Loki via the driver. @@ -98,9 +102,11 @@ docker-compose -f docker-compose.yaml up Once deployed, the Grafana service will send its logs to Loki. -> **Note**: stack name and service name for each swarm service and project name -> and service name for each compose service are automatically discovered and -> sent as Loki labels, this way you can filter by them in Grafana. +{{% admonition type="note" %}} +The stack name and service name for each swarm service, and the project name +and service name for each compose service, are automatically discovered and +sent as Loki labels, so you can filter by them in Grafana. +{{% /admonition %}} ## Labels @@ -144,7 +150,11 @@ services: - "3000:3000" ``` -> Note the `loki-pipeline-stages: |` allowing to keep the indentation correct. +{{% admonition type="note" %}} + +Using `loki-pipeline-stages: |` keeps the indentation correct. + +{{% /admonition %}} When using `docker run` you can also pass the value via a string parameter like so: diff --git a/docs/sources/clients/promtail/logrotation/_index.md b/docs/sources/clients/promtail/logrotation/_index.md index db34011c9be8c..cc9b3cc2fb5e5 100644 --- a/docs/sources/clients/promtail/logrotation/_index.md +++ b/docs/sources/clients/promtail/logrotation/_index.md @@ -13,7 +13,9 @@ At any point in time, there may be three processes working on a log file as show 2. Tailer - A reader that reads log lines as they are appended, for example, agents like Promtail. 3. Log Rotator - A process that rotates the log file either based on time (for example, scheduled every day) or size (for example, a log file reached its maximum size). -> **NOTE:** Here `fd` defines a file descriptor. 
Once a file is open for read or write, The Operating System returns a unique file descriptor (usually an integer) per process, and all the operations like read and write are done over that file descriptor. In other words, once the file is opened successfully, the file descriptor matters more than the file name. +{{% admonition type="note" %}} +Here, `fd` refers to a file descriptor. Once a file is open for read or write, the operating system returns a unique file descriptor (usually an integer) per process, and all operations such as read and write are performed on that file descriptor. In other words, once the file is opened successfully, the file descriptor matters more than the file name. +{{% /admonition %}} One of the critical components here is the log rotator. Let's understand how it impacts other components like the appender and tailer. @@ -91,7 +93,9 @@ You can [configure](https://kubernetes.io/docs/concepts/cluster-administration/l Both should be part of the `kubelet` config. If you run a managed version of Kubernetes in the cloud, refer to your cloud provider's documentation for configuring `kubelet`. Examples: [GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/node-system-config#create), [AKS](https://learn.microsoft.com/en-us/azure/aks/custom-node-configuration#use-custom-node-configuration) and [EKS](https://eksctl.io/usage/customizing-the-kubelet/#customizing-kubelet-configuration). -> **NOTE:** Log rotation managed by `kubelet` supports only rename + create and doesn't support copy + truncate. +{{% admonition type="note" %}} +Log rotation managed by `kubelet` supports only rename + create and doesn't support copy + truncate. +{{% /admonition %}} If `kubelet` is not configured to manage the log rotation, then it's up to the Container Runtime Interface (CRI) that the cluster uses. Alternatively, log rotation can be managed by the `logrotate` utility on the Kubernetes node itself. 
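For reference, the two `kubelet` settings alluded to above are, to the best of our knowledge, `containerLogMaxSize` and `containerLogMaxFiles`. A sketch of a `KubeletConfiguration` fragment follows; the values shown are illustrative only and are not part of this patch:

```yaml
# Illustrative KubeletConfiguration fragment; values are examples only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: "10Mi"  # rotate once a container log reaches 10 MiB
containerLogMaxFiles: 5      # keep at most five rotated files per container
```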
@@ -133,7 +137,9 @@ Example `/etc/docker/daemon.json`: If neither `kubelet` nor `CRI` is configured for rotating logs, then the `logrotate` utility can be used on the Kubernetes nodes as explained previously. -> **NOTE:** We recommend using kubelet for log rotation. +{{% admonition type="note" %}} +We recommend using `kubelet` for log rotation. +{{% /admonition %}} ## Configure Promtail diff --git a/docs/sources/operations/scalability.md b/docs/sources/operations/scalability.md index 3b7769da9e99c..18f2a64f6b862 100644 --- a/docs/sources/operations/scalability.md +++ b/docs/sources/operations/scalability.md @@ -65,7 +65,9 @@ this will result in far lower `ruler` resource usage because the majority of the The LogQL queries coming from the `ruler` will be executed against the given `query-frontend` service. Requests will be load-balanced across all `query-frontend` IPs if the `dns:///` prefix is used. -> **Note:** Queries that fail to execute are _not_ retried. +{{% admonition type="note" %}} +Queries that fail to execute are _not_ retried. +{{% /admonition %}} ### Limits & Observability diff --git a/docs/sources/operations/storage/logs-deletion.md b/docs/sources/operations/storage/logs-deletion.md index 9ac01792a4161..bef52c0a10007 100644 --- a/docs/sources/operations/storage/logs-deletion.md +++ b/docs/sources/operations/storage/logs-deletion.md @@ -21,7 +21,9 @@ Log entry deletion relies on configuration of the custom logs retention workflow Enable log entry deletion by setting `retention_enabled` to `true` in the compactor's configuration and setting `deletion_mode` to `filter-only` or `filter-and-delete` in the runtime config. -> **Warning:** Be very careful when enabling retention. It is strongly recommended that you also enable versioning on your objects in object storage to allow you to recover from accidental misconfiguration of a retention setting. 
If you want to enable deletion but not not want to enforce retention, configure the `retention_period` setting with a value of `0s`. +{{% admonition type="warning" %}} +Be very careful when enabling retention. It is strongly recommended that you also enable versioning on your objects in object storage to allow you to recover from accidental misconfiguration of a retention setting. If you want to enable deletion but do not want to enforce retention, configure the `retention_period` setting with a value of `0s`. +{{% /admonition %}} Because it is a runtime configuration, `deletion_mode` can be set per-tenant, if desired. diff --git a/docs/sources/operations/storage/retention.md b/docs/sources/operations/storage/retention.md index 7c3dcf1a4d978..a68fc0c5d53d1 100644 --- a/docs/sources/operations/storage/retention.md +++ b/docs/sources/operations/storage/retention.md @@ -171,14 +171,18 @@ Alternatively, the `table-manager.retention-period` and provided retention period needs to be a duration represented as a string that can be parsed using the Prometheus common model [ParseDuration](https://pkg.go.dev/github.com/prometheus/common/model#ParseDuration). Examples: `7d`, `1w`, `168h`. -> **WARNING**: The retention period must be a multiple of the index and chunks table +{{% admonition type="warning" %}} +The retention period must be a multiple of the index and chunks table `period`, configured in the [`period_config`]({{}}) block. See the [Table Manager]({{}}) documentation for more information. +{{% /admonition %}} -> **NOTE**: To avoid querying of data beyond the retention period, +{{% admonition type="note" %}} +To avoid querying of data beyond the retention period, `max_look_back_period` config in [`chunk_store_config`]({{}}) must be set to a value less than or equal to what is set in `table_manager.retention_period`. +{{% /admonition %}} When using S3 or GCS, the bucket storing the chunks needs to have the expiry policy set correctly. 
For more details check diff --git a/docs/sources/release-notes/cadence.md b/docs/sources/release-notes/cadence.md index 162e597b886be..83f52ebcbab69 100644 --- a/docs/sources/release-notes/cadence.md +++ b/docs/sources/release-notes/cadence.md @@ -15,10 +15,12 @@ naming scheme: `MAJOR`.`MINOR`.`PATCH`. - `MINOR` (roughly once a quarter): these releases include new features which generally do not break backwards-compatibility, but from time to time we might introduce _minor_ breaking changes, and we will specify these in our upgrade docs. - `PATCH` (roughly once or twice a month): these releases include bug & security fixes which do not break backwards-compatibility. -> **NOTE:** While our naming scheme resembles [Semantic Versioning](https://semver.org/), at this time we do not strictly follow its +{{% admonition type="note" %}} +While our naming scheme resembles [Semantic Versioning](https://semver.org/), at this time we do not strictly follow its guidelines to the letter. Our goal is to provide regular releases that are as stable as possible, and we take backwards-compatibility seriously. As with any software, always read the [release notes](/release-notes) and the [upgrade guide](/upgrading) whenever choosing a new version of Loki to install. +{{% /admonition %}} New releases are based on a [weekly release](#weekly-releases) which we have vetted for stability over a number of weeks.
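The retention warning earlier in this patch requires the retention period to be an exact multiple of the index and chunks table `period`. A quick illustrative check of that rule, in plain Python (not part of Loki or this patch; the helper names are invented, and only the `h`/`d`/`w` duration units are handled):

```python
# Illustrative sketch: check that a retention period is an exact multiple
# of the table `period`, as the retention warning above requires.
# Supports a small subset of Prometheus-style durations: h, d, w.

def to_hours(duration: str) -> int:
    units = {"h": 1, "d": 24, "w": 7 * 24}
    return int(duration[:-1]) * units[duration[-1]]

def is_valid_retention(retention: str, table_period: str) -> bool:
    return to_hours(retention) % to_hours(table_period) == 0

print(is_valid_retention("168h", "24h"))  # 7 full table periods -> True
print(is_valid_retention("100h", "24h"))  # not a multiple -> False
```

For example, with daily tables (`24h` period), `7d`, `1w`, and `168h` all pass, while `100h` would be rejected.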