# Update production-ready Loki in docker-compose #6691

Merged: 6 commits, Jul 18, 2022
3 changes: 3 additions & 0 deletions .dockerignore
@@ -0,0 +1,3 @@
production/docker/.data
.cache
.git
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -5,6 +5,7 @@
#### Loki

##### Enhancements
* [6691](https://github.com/grafana/loki/pull/6691) **dannykopping**: Update production-ready Loki cluster in docker-compose
* [6317](https://github.com/grafana/loki/pull/6317) **dannykopping**: General: add cache usage statistics
* [6444](https://github.com/grafana/loki/pull/6444) **aminesnow**: Add TLS config to query frontend.
* [6372](https://github.com/grafana/loki/pull/6372) **splitice**: Add support for numbers in JSON fields.
10 changes: 10 additions & 0 deletions docs/sources/upgrading/_index.md
@@ -42,6 +42,16 @@ If you want to run at most a single querier per node, set `$._config.querier.use

This value now defaults to 3100, so the Loki process doesn't require special privileges. Previously, it had been set to port 80, which is a privileged port. If you need Loki to listen on port 80, you can set it back to the previous default using `-server.http-listen-port=80`.

#### docker-compose setup has been updated

The docker-compose [setup](https://github.com/grafana/loki/blob/main/production/docker) has been updated to **v2.6.0** and includes many improvements.

Notable changes include:
- authentication (multi-tenancy) is **enabled** by default; you can disable it in `production/docker/config/loki.yaml` by setting `auth_enabled: false` (see the example below)
- storage now uses Minio instead of the local filesystem
  - move your current storage into `.data/minio` and it should work transparently
- a log-generator was added; if you don't need it, simply remove the service from `docker-compose.yaml` or don't start it
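Because authentication is now on by default, every push and query against this setup must carry a tenant ID header. The following is a minimal sketch of a push through the bundled nginx gateway; the `localhost:8080` address, the `job` label, and the payload are illustrative assumptions rather than part of this change:

```bash
# With auth_enabled: true, Loki rejects requests that lack a tenant ID.
# The docker-compose setup uses "docker" as the tenant, passed via X-Scope-OrgID.
# (date +%s%N assumes GNU date and produces a nanosecond timestamp.)
curl -s -X POST "http://localhost:8080/loki/api/v1/push" \
  -H "Content-Type: application/json" \
  -H "X-Scope-OrgID: docker" \
  --data-raw "{\"streams\": [{\"stream\": {\"job\": \"curl-test\"}, \"values\": [[\"$(date +%s%N)\", \"hello from curl\"]]}]}"
```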

## 2.6.0

### Loki
1 change: 1 addition & 0 deletions production/docker/.gitignore
@@ -1 +1,2 @@
loki/
.data
97 changes: 72 additions & 25 deletions production/docker/README.md
@@ -1,37 +1,84 @@
# Loki cluster using docker-compose
# Loki with docker-compose

To deploy a cluster of loki nodes on a local machine (as shown below), you could use the `docker-compose-ha-member.yaml` file.
You can use this `docker-compose` setup to run Loki locally for development or in production.

<img src="./docker-compose-ha-diagram.png" width="850">
## Features

Some features of the deployment:
- Running in [Simple Scalable Deployment](https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#simple-scalable-deployment-mode) mode with 3 replicas for `read` and `write` targets
- Memberlist for [consistent hash](https://grafana.com/docs/loki/latest/fundamentals/architecture/rings/) ring
- [Minio](https://min.io/) for S3-compatible storage for chunks & indexes
- nginx gateway which acts as a reverse-proxy to the read/write paths
- Promtail for logs
- An optional log-generator
- Multi-tenancy enabled (`docker` as the tenant ID)
- Configuration for interactive debugging (see [Debugging](#debugging) section below)
- Prometheus for metric collection

- Backend: 3 Loki servers enabled with the distributor, ingester, and querier modules
- Together they form a cluster ring based on the memberlist mechanism (if using consul/etcd, modules can be separated to further split read/write workloads)
- Index data is stored and replicated through boltdb-shipper
- Replication_factor=2: the receiving distributor sends log data to 2 ingesters, based on consistent hashing
- Chunk storage is a shared directory mounted from the same host directory (to simulate S3 or GCS)
- Queries are performed through the two query-frontend servers
- An nginx gateway routes the write and read workloads from clients (Grafana, Promtail)
## Diagram

1. Ensure you have the most up-to-date Docker container images:
The diagram below describes the various components of this deployment and how data flows between them.

```bash
docker-compose pull
```
```mermaid
graph LR
    %% Review comment (Collaborator): fancy!
Grafana --> |Query logs| nginx["nginx (port: 8080)"]
Promtail -->|Send logs| nginx

1. Run the stack on your local Docker:
nginx -.-> |read path| QueryFrontend["query-frontend"]
nginx -.-> |write path| Distributor

```bash
docker-compose -f ./docker-compose-ha-memberlist.yaml up
```
QueryFrontend -.-> Querier

1. When adding the data source in the Grafana dashboard, use `http://loki-gateway:3100` for the URL field.
subgraph LokiRead["loki -target=read"]
%% Review comment (Contributor): Can we still run query-scheduler as a different component?
%% Reply (Author): Sure, why not?
Querier["querier"]
end

1. To clean up
subgraph Minio["Minio Storage"]
Chunks
Indexes
end

```bash
docker-compose -f ./docker-compose-ha-memberlist.yaml down
```
subgraph LokiWrite["loki -target=write"]
Distributor["distributor"] -.-> Ingester["ingester"]
Ingester
end

Remove the data under `./loki`.
Querier --> |reads| Chunks & Indexes
Ingester --> |writes| Chunks & Indexes
```

## Getting Started

Simply run `docker-compose up` and all the components will start.

It'll take a few seconds for all the components to start up and register in the [ring](http://localhost:8080/ring). Once all instances are `ACTIVE`, Loki will start accepting reads and writes. All logs will be stored with the tenant ID `docker`.

All data will be stored in the `.data` directory.

The nginx gateway runs on port `8080` and you can access Loki through it.

Prometheus runs on port `9090`, and you can access all metrics from Loki & Promtail here.

Grafana runs on port `3000`, and there are Loki & Prometheus datasources enabled by default.
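As a quick sanity check once all instances are `ACTIVE`, you can read logs back through the gateway. This is a sketch; the label selector and the `localhost` addresses are assumptions based on the defaults described above:

```bash
# Query recent log lines through the nginx gateway on port 8080, using the
# same "docker" tenant ID the logs were written under.
curl -s -G "http://localhost:8080/loki/api/v1/query_range" \
  -H "X-Scope-OrgID: docker" \
  --data-urlencode 'query={job=~".+"}' \
  --data-urlencode 'limit=10'
```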

## Endpoints

- [`/ring`](http://localhost:8080/ring) - view all components registered in the hash ring
- [`/config`](http://localhost:8080/config) - view the configuration used by Loki
- [`/memberlist`](http://localhost:8080/memberlist) - view all components in the memberlist cluster
- [all other Loki API endpoints](https://grafana.com/docs/loki/latest/api/)
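For example, with the stack up on the default ports, you can check these from the command line (a sketch; the paths come from the list above, the port from the compose defaults):

```bash
# Status endpoints served through the nginx gateway; these generally do not
# require the X-Scope-OrgID tenant header.
curl -s http://localhost:8080/ring | head -n 20        # hash ring membership
curl -s http://localhost:8080/memberlist | head -n 20  # memberlist cluster view
curl -s http://localhost:8080/config | head -n 20      # effective Loki configuration
```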

## Debugging

First, you'll need to build a Loki image that includes and runs [delve](https://github.com/go-delve/delve).

Run `make loki-debug-image` from the root of this project. Grab the image name from the output (it'll look like `grafana/loki:...-debug`) and replace the Loki images in `docker-compose.yaml`.

Next, view the `docker-compose.yaml` file and uncomment the sections related to debugging.

You can follow [this guide](https://blog.jetbrains.com/go/2020/05/06/debugging-a-go-application-inside-a-docker-container/) to enable debugging in GoLand, but the basic steps are:

1. Bind a host port to one of the Loki services
2. Add a _Go Remote_ debug configuration in GoLand and use that port
3. Run `docker-compose up`
4. Set a breakpoint and start the debug configuration
5. Build/debug something awesome :)
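In shell form, the workflow looks roughly like this. The image tag and the delve port are assumptions; use whatever tag `make` prints and whatever host port you bind in `docker-compose.yaml`:

```bash
# 1. Build the delve-enabled image and note the tag printed at the end.
make loki-debug-image

# 2. Swap that tag into docker-compose.yaml, uncomment the debug sections
#    (including a host port mapping for delve), then start the stack.
docker-compose up

# 3. Attach delve from the host; 40000 is a placeholder for the port you bound.
dlv connect localhost:40000
```
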
25 changes: 25 additions & 0 deletions production/docker/config/datasources.yaml
@@ -0,0 +1,25 @@
apiVersion: 1
datasources:
  - access: proxy
    basicAuth: false
    jsonData:
      httpHeaderName1: "X-Scope-OrgID"
    secureJsonData:
      httpHeaderValue1: "docker"
    editable: true
    isDefault: true
    name: loki
    type: loki
    uid: loki
    url: http://loki-gateway
    version: 1

  - access: proxy
    basicAuth: false
    editable: true
    isDefault: false
    name: prometheus
    type: prometheus
    uid: prometheus
    url: http://prometheus:9090
    version: 1
@@ -1,4 +1,4 @@
auth_enabled: false
auth_enabled: true

http_prefix:

@@ -9,64 +9,60 @@ server:
  grpc_listen_port: 9095
  log_level: info

common:
  storage:
    s3:
      endpoint: minio:9000
      insecure: true
      bucketnames: loki-data
      access_key_id: loki
      secret_access_key: supersecret
      s3forcepathstyle: true

memberlist:
  join_members: ["loki-1", "loki-2", "loki-3"]
  join_members: ["loki-read", "loki-write"]
  dead_node_reclaim_time: 30s
  gossip_to_dead_nodes_time: 15s
  left_ingesters_timeout: 30s
  bind_addr: ['0.0.0.0']
  bind_port: 7946
  gossip_interval: 2s

ingester:
  lifecycler:
    join_after: 60s
    join_after: 10s
    observe_period: 5s
    ring:
      replication_factor: 2
      replication_factor: 3
      kvstore:
        store: memberlist
    final_sleep: 0s
  chunk_idle_period: 1h
  chunk_idle_period: 1m
  wal:
    enabled: true
    dir: /loki/wal
  max_chunk_age: 1h
  max_chunk_age: 1m
  chunk_retain_period: 30s
  chunk_encoding: snappy
  chunk_target_size: 0
  chunk_target_size: 1.572864e+06
  chunk_block_size: 262144
  # chunk_target_size: 1.572864e+06

# Only needed for global rate strategy
# distributor:
#   ring:
#     kvstore:
#       store: memberlist
  flush_op_timeout: 10s

schema_config:
  configs:
    - from: 2020-08-01
      store: boltdb-shipper
      object_store: filesystem
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    # shared_store: s3
    shared_store: filesystem
    active_index_directory: /loki/index
    cache_location: /loki/boltdb-cache

  #aws:
  #  s3: s3://us-east-1/mybucket
  #  sse_encryption: true
  #  insecure: false
  #  s3forcepathstyle: true
  filesystem:
    directory: /loki/chunks
    shared_store: s3
    active_index_directory: /tmp/index
    cache_location: /tmp/boltdb-cache


limits_config:
@@ -93,21 +89,17 @@ query_range:
  parallelise_shardable_queries: true
  cache_results: true

  results_cache:
    cache:
      # We're going to use the in-process "FIFO" cache
      enable_fifocache: true
      fifocache:
        size: 1024
        validity: 24h

frontend:
  log_queries_longer_than: 5s
  compress_responses: true
  max_outstanding_per_tenant: 2048

query_scheduler:
  max_outstanding_requests_per_tenant: 1024

querier:
  query_ingesters_within: 2h

compactor:
  working_directory: /loki/compactor
  shared_store: filesystem
  working_directory: /tmp/compactor
  shared_store: s3
@@ -7,7 +7,6 @@ events {
}

http {

  default_type application/octet-stream;
  log_format main '$remote_addr - $remote_user [$time_local] $status '
      '"$request" $body_bytes_sent "$http_referer" '
@@ -16,16 +15,17 @@ http {
  sendfile on;
  tcp_nopush on;

  upstream distributor {
    server loki-1:3100;
    server loki-2:3100;
    server loki-3:3100;
  upstream read {
    server loki-read:3100;
  }

  upstream write {
    server loki-write:3100;
  }

  upstream querier {
    server loki-1:3100;
    server loki-2:3100;
    server loki-3:3100;
  upstream cluster {
    server loki-read:3100;
    server loki-write:3100;
  }

  upstream query-frontend {
@@ -34,18 +34,34 @@

  server {
    listen 80;
    proxy_set_header X-Scope-OrgID docker-ha;
    listen 3100;

    location = /loki/api/v1/push {
      proxy_pass http://distributor$request_uri;
    }

    location = /ring {
      proxy_pass http://distributor$request_uri;
      proxy_pass http://cluster$request_uri;
    }

    location = /memberlist {
      proxy_pass http://cluster$request_uri;
    }

    location = /config {
      proxy_pass http://cluster$request_uri;
    }

    location = /metrics {
      proxy_pass http://cluster$request_uri;
    }

    location = /ready {
      proxy_pass http://cluster$request_uri;
    }

    location = /loki/api/v1/push {
      proxy_pass http://write$request_uri;
    }

    location = /loki/api/v1/tail {
      proxy_pass http://querier$request_uri;
      proxy_pass http://read$request_uri;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
@@ -54,14 +70,4 @@ http {
      proxy_pass http://query-frontend$request_uri;
    }
  }

  server {
    listen 3100;
    proxy_set_header X-Scope-OrgID docker-ha;

    location ~ /loki/api/.* {
      proxy_pass http://querier$request_uri;

    }

}