
Commit

Add another use-case - otel demo that pushes metrics + support also `.` as a metric chunks separator

Signed-off-by: Jirka Kremser <jiri.kremser@gmail.com>
jkremser committed Oct 24, 2024
1 parent 04629b0 commit 426e83a
Showing 9 changed files with 206 additions and 9 deletions.
2 changes: 1 addition & 1 deletion examples/metric-pull/README.md
@@ -28,7 +28,7 @@ open http://localhost:8181/metrics

Install this addon:
```bash
-helm upgrade -i kedify-otel kedify-otel/otel-add-on --version=v0.0.1-0 -f collector-pull-values.yaml
+helm upgrade -i kedify-otel kedify-otel/otel-add-on --version=v0.0.1-0 -f scaler-with-collector-pull-values.yaml
```

Note the following section in the helm chart values that configures the OTEL collector to scrape targets:
@@ -22,7 +22,7 @@ opentelemetry-collector:
- 'watch'
config:
receivers:
-      opencensus: null
+      opencensus:
      # https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md
prometheus:
config:
@@ -48,9 +48,9 @@
target_label: __address__
regex: (.+)(?::\d+);(\d+)
replacement: $1:$2
-      zipkin: null
-      jaeger: null
-      otlp: null
+      zipkin:
+      jaeger:
+      otlp:
exporters:
otlp:
endpoint: keda-otel-scaler:4317
@@ -72,8 +72,8 @@

service:
pipelines:
-        traces: null
-        logs: null
+        traces:
+        logs:
metrics:
receivers:
- prometheus
89 changes: 89 additions & 0 deletions examples/metric-push/README.md
@@ -0,0 +1,89 @@
# Use-case: push metrics

This use-case demonstrates how an already deployed OTEL collector (here, the one shipped with the
OpenTelemetry demo app) can push its metrics over OTLP directly into the OTLP receiver in our scaler.

Prepare helm chart repos:

```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo add kedify https://kedify.github.io/charts
helm repo add kedify-otel https://kedify.github.io/otel-add-on
helm repo update
```

Any Kubernetes cluster will do:
```bash
k3d cluster create metric-push -p "8080:31198@server:0"
```

Install demo app
- architecture: https://opentelemetry.io/docs/demo/architecture/
- helm chart: https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-demo

```bash
helm upgrade -i my-otel-demo open-telemetry/opentelemetry-demo -f opentelemetry-demo-values.yaml
# check if the app is running
open http://localhost:8080
```

Install this addon:
```bash
helm upgrade -i kedify-otel kedify-otel/otel-add-on --version=v0.0.1-0 -f scaler-only-push-values.yaml
```

In this scenario, we don't install the OTEL collector from the `kedify-otel/otel-add-on` helm chart, because
the `opentelemetry-demo` chart already deploys one, and we have configured it to forward all metrics to our scaler.
If we wanted to filter the metrics, we would need to deploy a second OTEL collector with a filtering processor,
so that the setup would look like this:

```bash
┌────────────┐ ┌────────────┐ ┌─────────────┐
│ │ │ │ │ │
│ OTEL col 1 ├────►│ OTEL col 2 ├────►│ this scaler │
│ │ │ (filtering)│ │ │
└────────────┘ └────────────┘ └─────────────┘

instead we go with the simple setup (without filtering):
┌────────────┐ ┌─────────────┐
│ │ │ │
│ OTEL col 1 ├────►│ this scaler │
│ │ │ │
└────────────┘ └─────────────┘
```
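For reference, the filtering collector ("OTEL col 2" above) could use the contrib `filter` processor. A minimal sketch, not taken from this repo; the processor name and metric list are illustrative and would need to match your actual metrics:

```yaml
# Hypothetical config for "OTEL col 2" -- only forwards the metrics the scaler needs
receivers:
  otlp:
    protocols:
      grpc:
processors:
  filter/keda:
    metrics:
      include:
        match_type: strict
        metric_names:
          - app_frontend_requests
exporters:
  otlp/keda:
    endpoint: keda-otel-scaler:4317
    tls:
      insecure: true
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [filter/keda]
      exporters: [otlp/keda]
```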

Install KEDA by Kedify.io:
```bash
helm upgrade -i keda kedify/keda --namespace keda --create-namespace
```

We will be scaling two microservices of this application. First, let's check which metrics are available in the bundled
[grafana](http://localhost:8080/grafana/explore?schemaVersion=1&panes=%7B%222n3%22:%7B%22datasource%22:%22webstore-metrics%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22app_frontend_requests_total%22,%22range%22:true,%22instant%22:true,%22datasource%22:%7B%22type%22:%22prometheus%22,%22uid%22:%22webstore-metrics%22%7D,%22editorMode%22:%22code%22,%22legendFormat%22:%22__auto%22,%22useBackend%22:false,%22disableTextWrap%22:false,%22fullMetaSearch%22:false,%22includeNullMetadata%22:true%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&orgId=1).

We will use the following two metrics to scale the microservices `recommendationservice` and `productcatalogservice`.
```bash
...
app_frontend_requests_total{instance="0b38958c-f169-4a83-9adb-cf2c2830d61e", job="opentelemetry-demo/frontend", method="GET", status="200", target="/api/recommendations"}
1824
app_frontend_requests_total{instance="0b38958c-f169-4a83-9adb-cf2c2830d61e", job="opentelemetry-demo/frontend", method="GET", status="200", target="/api/products"}
1027
...
```

Create `ScaledObject`s:
```bash
kubectl apply -f sos.yaml
```

The demo application contains a load generator that can be further tweaked via the http://localhost:8080/loadgen endpoint.
By default it already creates plenty of traffic in the e-shop, so there is no need to generate additional load from our
side; we can simply observe the effects of autoscaling:

```bash
watch kubectl get deploy my-otel-demo-recommendationservice my-otel-demo-productcatalogservice
```

Once finished, clean the cluster:
```bash
k3d cluster delete metric-push
```
24 changes: 24 additions & 0 deletions examples/metric-push/opentelemetry-demo-values.yaml
@@ -0,0 +1,24 @@
# https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-demo/values.yaml
components:
  frontendProxy:
    service:
      type: NodePort
      nodePort: 31198
opentelemetry-collector:
  config:
    exporters:
      # this is the original exporter, we just rename it from otlp to otlp/jaeger to preserve it
      otlp/jaeger:
        endpoint: '{{ include "otel-demo.name" . }}-jaeger-collector:4317'
        tls:
          insecure: true
      otlp/keda:
        endpoint: keda-otel-scaler:4317
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          exporters: [otlp/jaeger, debug, spanmetrics]
        metrics:
          exporters: [otlp/keda, otlphttp/prometheus, debug]
8 changes: 8 additions & 0 deletions examples/metric-push/scaler-only-push-values.yaml
@@ -0,0 +1,8 @@
settings:
  metricStoreRetentionSeconds: 60
  logs:
    logLvl: debug

# otel collector will be installed from another helm chart
opentelemetry-collector:
  enabled: false
47 changes: 47 additions & 0 deletions examples/metric-push/sos.yaml
@@ -0,0 +1,47 @@
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: recommendationservice
spec:
  scaleTargetRef:
    name: my-otel-demo-recommendationservice
  triggers:
    - type: external
      metadata:
        scalerAddress: "keda-otel-scaler.default.svc:4318"
        metricQuery: "avg(app_frontend_requests{target=/api/recommendations, method=GET, status=200})"
        operationOverTime: "rate"
        targetValue: "10"
        clampMax: "600"
  minReplicaCount: 1
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 10
        scaleUp:
          stabilizationWindowSeconds: 10
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-otel-demo-productcatalogservice
spec:
  scaleTargetRef:
    name: my-otel-demo-productcatalogservice
  triggers:
    - type: external
      metadata:
        scalerAddress: "keda-otel-scaler.default.svc:4318"
        metricQuery: "avg(app_frontend_requests{target=/api/products, method=GET, status=200})"
        operationOverTime: "rate"
        targetValue: "10"
        clampMax: "600"
  minReplicaCount: 1
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 10
        scaleUp:
          stabilizationWindowSeconds: 10
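How the trigger values interact is worth a sketch. The following is illustrative only, not the scaler's actual code: assuming `operationOverTime: "rate"` derives a per-second rate from the counter samples held in the metric store (60s retention in this example), and `clampMax` caps the value reported to KEDA, the computation would look roughly like this:

```go
package main

import "fmt"

// rateOverWindow is a hypothetical sketch of what operationOverTime: "rate"
// with clampMax might compute: the per-second increase of a counter over
// the retention window, capped at clampMax.
func rateOverWindow(first, last, windowSeconds, clampMax float64) float64 {
	rate := (last - first) / windowSeconds
	if rate > clampMax {
		return clampMax
	}
	return rate
}

func main() {
	// e.g. the counter grows from 1824 to 2424 over a 60s window:
	// 600 requests / 60s = 10 req/s, matching targetValue: "10"
	fmt.Println(rateOverWindow(1824, 2424, 60, 600))
}
```

KEDA then compares the reported value against `targetValue` to decide the desired replica count.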
1 change: 0 additions & 1 deletion helmchart/otel-add-on/values.yaml
@@ -125,7 +125,6 @@ opentelemetry-collector:
compression: "none"
tls:
insecure: true
-      # endpoint: otel-add-on:4317
# tls:
# cert_file: file.cert
# key_file: file.key
2 changes: 1 addition & 1 deletion metric/mem_store.go
@@ -124,7 +124,7 @@ func (m ms) Put(entry types.NewMetricEntry) {
}

func escapeName(name types.MetricName) types.MetricName {
-	return types.MetricName(strings.ReplaceAll(string(name), "/", "_"))
+	return types.MetricName(strings.ReplaceAll(strings.ReplaceAll(string(name), "/", "_"), ".", "_"))
}
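The change above makes `.` a second chunk separator alongside `/`, both normalized to `_` when a metric is stored. A standalone sketch of the same normalization (using plain strings instead of the repo's `types.MetricName`):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeName mirrors the updated helper: both '/' and '.' are treated
// as metric name separators and replaced with '_'.
func escapeName(name string) string {
	return strings.ReplaceAll(strings.ReplaceAll(name, "/", "_"), ".", "_")
}

func main() {
	fmt.Println(escapeName("metric/one")) // metric_one
	fmt.Println(escapeName("metric.two")) // metric_two
}
```

This is why OTLP-style dotted names such as `app.frontend.requests` and slash-style names land on the same stored key format.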

func timestampToSeconds(timestamp pcommon.Timestamp) uint32 {
30 changes: 30 additions & 0 deletions metric/mem_store_test.go
@@ -33,6 +33,36 @@ func TestMemStorePutOneAndGetOne(t *testing.T) {
assertMetricFound(t, val, found, err, 42.)
}

func TestMemStoreEscapeMetrics(t *testing.T) {
// setup
ms := NewMetricStore(5)
ms.Put(types.NewMetricEntry{
Name: "metric/one",
MeasurementTime: pcommon.Timestamp(time.Now().Unix()),
MeasurementValue: 42.,
Labels: map[string]any{
"a": "1",
"b": "2",
},
})
ms.Put(types.NewMetricEntry{
Name: "metric.two",
MeasurementTime: pcommon.Timestamp(time.Now().Unix()),
MeasurementValue: 43.,
Labels: map[string]any{
"a": "2",
},
})

// checks
val1, found1, err1 := ms.Get("metric_one", map[string]any{"b": "2", "a": "1"}, types.OpLastOne, types.VecSum)
assertMetricFound(t, val1, found1, err1, 42.)
val2, found2, err2 := ms.Get("metric.one", map[string]any{"b": "2", "a": "1"}, types.OpLastOne, types.VecSum)
assertMetricFound(t, val2, found2, err2, 42.)
val3, found3, err3 := ms.Get("metric_two", map[string]any{"a": "2"}, types.OpLastOne, types.VecSum)
assertMetricFound(t, val3, found3, err3, 43.)
}

func TestMemStoreErr(t *testing.T) {
// setup
ms := NewMetricStore(5)