What happened:
I see the following errors in the Grafana log:
{"err":"[]v1.ExemplarQueryResult: decode slice: expect [ or n, but found \u0000, error found in #0 byte of ...||..., bigger context ...||...","logger":"tsdb.prometheus","lvl":"eror","msg":"Exemplar query failed","query":"process_cpu_usage{namespace=\"spark-test\"}","t":"2022-02-28T09:07:46.63+0000"}
{"err":"[]v1.ExemplarQueryResult: decode slice: expect [ or n, but found \u0000, error found in #0 byte of ...||..., bigger context ...||...","logger":"tsdb.prometheus","lvl":"eror","msg":"Exemplar query failed","query":"sum(zeppelin_note_cache_hit_total{namespace=\"spark-test\"}) / (sum(zeppelin_note_cache_hit_total{namespace=\"spark-test\"}) + sum(zeppelin_note_cache_miss_total{namespace=\"spark-test\"}))","t":"2022-02-28T09:07:46.73+0000"}
{"err":"[]v1.ExemplarQueryResult: decode slice: expect [ or n, but found \u0000, error found in #0 byte of ...||..., bigger context ...||...","logger":"tsdb.prometheus","lvl":"eror","msg":"Exemplar query failed","query":"process_files_open_files{namespace=\"spark-test\"}","t":"2022-02-28T09:07:46.73+0000"}
What you expected to happen:
No errors.
How to reproduce it (as minimally and precisely as possible):
It just happens when I open a Grafana dashboard.
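To narrow down whether the empty payload comes from the Thanos Querier itself or from the Grafana datasource, one could query the exemplars endpoint directly and inspect the raw body. This is only a hedged sketch I put together, assuming the standard Prometheus-compatible `/api/v1/query_exemplars` endpoint and the 127.0.0.1:9090 query-node address from the logs below; adjust the address and query for your setup.

```go
// Sketch: fetch exemplars for one of the failing queries and dump the raw
// response, to check whether the body is empty or otherwise not JSON.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	now := time.Now()
	params := url.Values{}
	params.Set("query", `process_cpu_usage{namespace="spark-test"}`)
	params.Set("start", fmt.Sprint(now.Add(-1*time.Hour).Unix()))
	params.Set("end", fmt.Sprint(now.Unix()))

	resp, err := http.Get("http://127.0.0.1:9090/api/v1/query_exemplars?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy response is a JSON object with a "data" array; a zero-length
	// body here would explain the "expect [ or n, but found \u0000" decode error.
	fmt.Printf("status=%d len=%d body=%q\n", resp.StatusCode, len(body), body)
}
```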
Full logs to relevant components:
level=info ts=2022-02-28T09:04:26.161364912Z caller=client.go:55 msg="enabling client to server TLS"
level=info ts=2022-02-28T09:04:26.161683494Z caller=options.go:115 msg="TLS client using provided certificate pool"
level=info ts=2022-02-28T09:04:26.161707098Z caller=options.go:148 msg="TLS client authentication enabled"
level=info ts=2022-02-28T09:04:26.166055848Z caller=options.go:27 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2022-02-28T09:04:26.166787409Z caller=query.go:695 msg="starting query node"
level=info ts=2022-02-28T09:04:26.166967193Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-02-28T09:04:26.167250961Z caller=intrumentation.go:60 msg="changing probe status" status=healthy
level=info ts=2022-02-28T09:04:26.167328044Z caller=grpc.go:131 service=gRPC/server component=query msg="listening for serving gRPC" address=127.0.0.1:10901
level=info ts=2022-02-28T09:04:26.167341139Z caller=http.go:63 service=http/server component=query msg="listening for requests and metrics" address=127.0.0.1:9090
level=info ts=2022-02-28T09:04:26.16746526Z caller=tls_config.go:195 service=http/server component=query msg="TLS is disabled." http2=false
level=info ts=2022-02-28T09:04:31.189964822Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.131.8.12:10901 extLset="{prometheus=\"openshift-user-workload-monitoring/user-workload\", prometheus_replica=\"prometheus-user-workload-0\"}"
level=info ts=2022-02-28T09:04:31.190044591Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.128.10.59:10901 extLset="{prometheus=\"openshift-user-workload-monitoring/user-workload\", prometheus_replica=\"prometheus-user-workload-1\"}"
level=info ts=2022-02-28T09:04:31.190073174Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.128.10.62:10901 extLset="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-1\"}"
level=info ts=2022-02-28T09:04:31.190095015Z caller=endpointset.go:349 component=endpointset msg="adding new rule with [storeAPI rulesAPI]" address=10.128.10.61:10901 extLset="{thanos_ruler_replica=\"thanos-ruler-user-workload-1\"}"
level=info ts=2022-02-28T09:04:31.190121093Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.131.8.9:10901 extLset="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-0\"}"
level=info ts=2022-02-28T09:04:31.190148554Z caller=endpointset.go:349 component=endpointset msg="adding new rule with [storeAPI rulesAPI]" address=10.131.8.11:10901 extLset="{thanos_ruler_replica=\"thanos-ruler-user-workload-0\"}"
Thanos, Prometheus and Golang version used:
Thanos: quay.io/thanos/thanos:v0.24.0 - go1.16.12
Prometheus: 2.29.2 - go1.16.6
Grafana: grafana/grafana:8.4.2
Anything else we need to know:
I found VictoriaMetrics/VictoriaMetrics#2000, which may help in fixing the problem.
If you think this is a Grafana issue, please let me know.