release-2.2.1 #3577

Closed
wants to merge 9 commits
279 changes: 279 additions & 0 deletions CHANGELOG.md

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions docs/sources/clients/promtail/stages/_index.md
@@ -24,6 +24,7 @@ Action stages:
- [timestamp](timestamp/): Set the timestamp value for the log entry.
- [output](output/): Set the log line text.
- [labeldrop](labeldrop/): Drop label set for the log entry.
- [labelallow](labelallow/): Allow only the listed labels in the label set for the log entry.
- [labels](labels/): Update the label set for the log entry.
- [metrics](metrics/): Calculate metrics based on extracted data.
- [tenant](tenant/): Set the tenant ID value to use for the log entry.
41 changes: 41 additions & 0 deletions docs/sources/clients/promtail/stages/labelallow.md
@@ -0,0 +1,41 @@
---
title: labelallow
---
# `labelallow` stage

The labelallow stage is an action stage that allows only the provided labels
to be included in the label set that is sent to Loki with the log entry.

## Schema

```yaml
labelallow:
  - [<string>]
  ...
```

### Examples

For the given pipeline:

```yaml
kubernetes_sd_configs:
  - role: pod
pipeline_stages:
  - docker: {}
  - labelallow:
      - kubernetes_pod_name
      - kubernetes_container_name
```

Given the following incoming labels:

- `kubernetes_pod_name`: `"loki-pqrs"`
- `kubernetes_container_name`: `"loki"`
- `kubernetes_pod_template_hash`: `"79f5db67b"`
- `kubernetes_controller_revision_hash`: `"774858987d"`

Only the following labels would be sent to `loki`:

- `kubernetes_pod_name`: `"loki-pqrs"`
- `kubernetes_container_name`: `"loki"`
2 changes: 1 addition & 1 deletion docs/sources/clients/promtail/stages/labeldrop.md
@@ -3,7 +3,7 @@ title: labeldrop
---
# `labeldrop` stage

-The labeldrop stage is an action stage that takes drops labels from
+The labeldrop stage is an action stage that drops labels from
the label set that is sent to Loki with the log entry.

## Schema
18 changes: 9 additions & 9 deletions docs/sources/installation/docker.md
@@ -18,10 +18,10 @@ For production, we recommend installing with Tanka or Helm.
Copy and paste the commands below into your command line.

```bash
-wget https://raw.githubusercontent.com/grafana/loki/v2.1.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml
-docker run -v $(pwd):/mnt/config -p 3100:3100 grafana/loki:2.1.0 -config.file=/mnt/config/loki-config.yaml
-wget https://raw.githubusercontent.com/grafana/loki/v2.1.0/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
-docker run -v $(pwd):/mnt/config -v /var/log:/var/log grafana/promtail:2.1.0 -config.file=/mnt/config/promtail-config.yaml
+wget https://raw.githubusercontent.com/grafana/loki/v2.2.1/cmd/loki/loki-local-config.yaml -O loki-config.yaml
+docker run -v $(pwd):/mnt/config -p 3100:3100 grafana/loki:2.2.1 -config.file=/mnt/config/loki-config.yaml
+wget https://raw.githubusercontent.com/grafana/loki/v2.2.1/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
+docker run -v $(pwd):/mnt/config -v /var/log:/var/log grafana/promtail:2.2.1 -config.file=/mnt/config/promtail-config.yaml
```

When finished, `loki-config.yaml` and `promtail-config.yaml` are downloaded in the directory you chose. Docker containers are running Loki and Promtail using those config files.
@@ -36,10 +36,10 @@ Copy and paste the commands below into your terminal. Note that you will need to

```bash
cd "<local-path>"
-wget https://raw.githubusercontent.com/grafana/loki/v2.1.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml
-docker run -v <local-path>:/mnt/config -p 3100:3100 grafana/loki:2.1.0 --config.file=/mnt/config/loki-config.yaml
-wget https://raw.githubusercontent.com/grafana/loki/v2.1.0/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
-docker run -v <local-path>:/mnt/config -v /var/log:/var/log grafana/promtail:2.1.0 --config.file=/mnt/config/promtail-config.yaml
+wget https://raw.githubusercontent.com/grafana/loki/v2.2.1/cmd/loki/loki-local-config.yaml -O loki-config.yaml
+docker run -v <local-path>:/mnt/config -p 3100:3100 grafana/loki:2.2.1 --config.file=/mnt/config/loki-config.yaml
+wget https://raw.githubusercontent.com/grafana/loki/v2.2.1/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
+docker run -v <local-path>:/mnt/config -v /var/log:/var/log grafana/promtail:2.2.1 --config.file=/mnt/config/promtail-config.yaml
```

When finished, `loki-config.yaml` and `promtail-config.yaml` are downloaded in the directory you chose. Docker containers are running Loki and Promtail using those config files.
@@ -51,6 +51,6 @@ Navigate to http://localhost:3100/metrics to view the output.
Run the following commands in your command line. They work for Windows or Linux systems.

```bash
-wget https://raw.githubusercontent.com/grafana/loki/v2.1.0/production/docker-compose.yaml -O docker-compose.yaml
+wget https://raw.githubusercontent.com/grafana/loki/v2.2.1/production/docker-compose.yaml -O docker-compose.yaml
docker-compose -f docker-compose.yaml up
```
47 changes: 43 additions & 4 deletions docs/sources/upgrading/_index.md
@@ -19,12 +19,51 @@ If possible try to stay current and do sequential updates. If you want to skip v

-_add changes here which are unreleased_

-### Promtail config changes
+## 2.2.1

-In [this PR](https://github.com/grafana/loki/pull/3404), we reverted a bug that caused `scrape_configs` entries without a
-`pipeline_stages` definition to default to the `docker` pipeline stage.
+Review the notes for 2.2.0; there are no additional update notes for 2.2.1.

-If any of your `scrape_configs` are missing this definition, you should add the following to maintain this behaviour:
+## 2.2.0

### Loki

**Be sure to upgrade to 2.0 or 2.1 BEFORE upgrading to 2.2**

In Loki 2.2 we changed the internal version of our chunk format from v2 to v3. This is a transparent change and is only relevant if you ever try to _downgrade_ a Loki installation. We incorporated the code to read v3 chunks in 2.0.1 and 2.1, as well as 2.2 and any future releases.

**If you upgrade to 2.2+ any chunks created can only be read by 2.0.1, 2.1 and 2.2+**

This makes it important to first upgrade to 2.0, 2.0.1, or 2.1 **before** upgrading to 2.2, so that if you need to roll back for any reason you can do so easily.

**Note:** 2.0 and 2.0.1 are identical in every aspect except that 2.0.1 contains the code necessary to read the v3 chunk format. Therefore, if you are on 2.0 and upgrade to 2.2 and later want to roll back, you must roll back to 2.0.1.

### Loki Config

**Read this if you use the query-frontend and have `sharded_queries_enabled: true`**

We discovered that the scheduling of sharded queries over long time ranges could let a single query claim an unfair share of the per-tenant work queue.

The `max_query_parallelism` setting is designed to limit how many split and sharded units of 'work' for a single query may be put into the per-tenant work queue at one time. The previous behavior split the query by time using `split_queries_by_interval` and compared the result against `max_query_parallelism` when filling the queue; however, with sharding enabled, every split was then sharded into 16 additional units of work _after_ the `max_query_parallelism` limit was applied.

In 2.2 we changed this behavior to apply `max_query_parallelism` after splitting _and_ sharding a query, resulting in fairer and more predictable queue scheduling per query.

**What this means:** Loki will put much less work into the work queue per query if you are using the query frontend and have `sharded_queries_enabled: true` (which you should). **You may need to increase your `max_query_parallelism` setting if you notice slower query performance.** In practice, you may not see a difference unless you run a cluster with a large number of queriers, or queriers with a very high `parallelism` frontend_worker setting.

You could consider multiplying your current `max_query_parallelism` setting by 16 to obtain the previous behavior, though in practice we suspect few people would want it this high unless they have a significant querier worker pool.

**Also make sure `max_outstanding_per_tenant` is always greater than `max_query_parallelism`, or large queries will automatically fail with a 429 returned to the user.**
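A minimal sketch of the two settings together (the values are illustrative assumptions, not recommendations, and the block placement reflects our reading of Loki 2.2's configuration — verify against your deployment's config reference):

```yaml
limits_config:
  # In 2.2 this is applied after splitting *and* sharding, so it bounds the
  # total units of work one query can enqueue per tenant.
  max_query_parallelism: 32

frontend:
  # Per-tenant queue depth in the query frontend; keep this comfortably
  # above max_query_parallelism or large queries may fail with HTTP 429.
  max_outstanding_per_tenant: 2048
```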



### Promtail

For 2.0 we eliminated the long-deprecated `entry_parser` configuration in Promtail configs; however, in doing so we introduced a confusing and erroneous default behavior:

If you did not specify a `pipeline_stages` entry, you were given a default that included the `docker` pipeline stage. This could lead to very confusing results.

In [PR 3404](https://github.com/grafana/loki/pull/3404), we corrected this behavior.

**If you are using docker, and any of your `scrape_configs` are missing a `pipeline_stages` definition**, you should add the following to obtain the correct behavior:

```yaml
pipeline_stages:
  - docker: {}
```
8 changes: 1 addition & 7 deletions pkg/logcli/query/query.go
@@ -58,7 +58,6 @@ type Query struct {

// DoQuery executes the query and prints out the results
func (q *Query) DoQuery(c client.Client, out output.LogOutput, statistics bool) {

if q.LocalConfig != "" {
if err := q.DoLocalQuery(out, statistics, c.GetOrgID()); err != nil {
log.Fatalf("Query failed: %+v", err)
@@ -149,7 +148,6 @@ func (q *Query) DoQuery(c client.Client, out output.LogOutput, statistics bool)

}
}

}

func (q *Query) printResult(value loghttp.ResultValue, out output.LogOutput, lastEntry []*loghttp.Entry) (int, []*loghttp.Entry) {
@@ -172,7 +170,6 @@ func (q *Query) printResult(value loghttp.ResultValue, out output.LogOutput, las

// DoLocalQuery executes the query against the local store using a Loki configuration file.
func (q *Query) DoLocalQuery(out output.LogOutput, statistics bool, orgID string) error {

var conf loki.Config
conf.RegisterFlags(flag.CommandLine)
if q.LocalConfig == "" {
@@ -255,7 +252,7 @@ func (q *Query) SetInstant(time time.Time) {
}

func (q *Query) isInstant() bool {
-return q.Start == q.End
+return q.Start == q.End && q.Step == 0
}

func (q *Query) printStream(streams loghttp.Streams, out output.LogOutput, lastEntry []*loghttp.Entry) (int, []*loghttp.Entry) {
@@ -369,7 +366,6 @@ func (q *Query) printMatrix(matrix loghttp.Matrix) {
// it gives us more flexibility with regard to output types in the future. initially we are supporting just formatted json but eventually
// we might add output options such as render to an image file on disk
bytes, err := json.MarshalIndent(matrix, "", " ")

if err != nil {
log.Fatalf("Error marshalling matrix: %v", err)
}
@@ -379,7 +375,6 @@ func (q *Query) printMatrix(matrix loghttp.Matrix) {

func (q *Query) printVector(vector loghttp.Vector) {
bytes, err := json.MarshalIndent(vector, "", " ")

if err != nil {
log.Fatalf("Error marshalling vector: %v", err)
}
@@ -389,7 +384,6 @@ func (q *Query) printVector(vector loghttp.Vector) {

func (q *Query) printScalar(scalar loghttp.Scalar) {
bytes, err := json.MarshalIndent(scalar, "", " ")

if err != nil {
log.Fatalf("Error marshalling scalar: %v", err)
}
65 changes: 65 additions & 0 deletions pkg/logentry/stages/labelallow.go
@@ -0,0 +1,65 @@
package stages

import (
	"time"

	"github.com/mitchellh/mapstructure"
	"github.com/pkg/errors"
	"github.com/prometheus/common/model"
)

const (
	// ErrEmptyLabelAllowStageConfig error returned if config is empty
	ErrEmptyLabelAllowStageConfig = "labelallow stage config cannot be empty"
)

// LabelAllowConfig is a slice of label names to keep; all other labels are dropped.
type LabelAllowConfig []string

func validateLabelAllowConfig(c LabelAllowConfig) error {
	// len(nil) == 0, so a separate nil check is unnecessary.
	if len(c) < 1 {
		return errors.New(ErrEmptyLabelAllowStageConfig)
	}

	return nil
}

func newLabelAllowStage(configs interface{}) (Stage, error) {
	cfgs := &LabelAllowConfig{}
	err := mapstructure.Decode(configs, cfgs)
	if err != nil {
		return nil, err
	}

	err = validateLabelAllowConfig(*cfgs)
	if err != nil {
		return nil, err
	}

	labelMap := make(map[string]struct{})
	for _, label := range *cfgs {
		labelMap[label] = struct{}{}
	}

	return toStage(&labelAllowStage{
		labels: labelMap,
	}), nil
}

type labelAllowStage struct {
	labels map[string]struct{}
}

// Process implements Stage: any label not present in the allow list is removed.
func (l *labelAllowStage) Process(labels model.LabelSet, extracted map[string]interface{}, t *time.Time, entry *string) {
	for label := range labels {
		if _, ok := l.labels[string(label)]; !ok {
			delete(labels, label)
		}
	}
}

// Name implements Stage
func (l *labelAllowStage) Name() string {
	return StageTypeLabelAllow
}
72 changes: 72 additions & 0 deletions pkg/logentry/stages/labelallow_test.go
@@ -0,0 +1,72 @@
package stages

import (
	"testing"
	"time"

	util_log "github.com/cortexproject/cortex/pkg/util/log"
	"github.com/prometheus/common/model"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	ww "github.com/weaveworks/common/server"
)

func Test_labelAllowStage_Process(t *testing.T) {
	// Enable debug logging
	cfg := &ww.Config{}
	require.Nil(t, cfg.LogLevel.Set("debug"))
	util_log.InitLogger(cfg)
	Debug = true

	tests := []struct {
		name           string
		config         *LabelAllowConfig
		inputLabels    model.LabelSet
		expectedLabels model.LabelSet
	}{
		{
			name:   "allow single label",
			config: &LabelAllowConfig{"testLabel1"},
			inputLabels: model.LabelSet{
				"testLabel1": "testValue",
				"testLabel2": "testValue",
			},
			expectedLabels: model.LabelSet{
				"testLabel1": "testValue",
			},
		},
		{
			name:   "allow multiple labels",
			config: &LabelAllowConfig{"testLabel1", "testLabel2"},
			inputLabels: model.LabelSet{
				"testLabel1": "testValue",
				"testLabel2": "testValue",
				"testLabel3": "testValue",
			},
			expectedLabels: model.LabelSet{
				"testLabel1": "testValue",
				"testLabel2": "testValue",
			},
		},
		{
			name:   "allow non-existing label",
			config: &LabelAllowConfig{"foobar"},
			inputLabels: model.LabelSet{
				"testLabel1": "testValue",
				"testLabel2": "testValue",
			},
			expectedLabels: model.LabelSet{},
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			st, err := newLabelAllowStage(test.config)
			if err != nil {
				t.Fatal(err)
			}
			out := processEntries(st, newEntry(nil, test.inputLabels, "", time.Now()))[0]
			assert.Equal(t, test.expectedLabels, out.Labels)
		})
	}
}