The Prometheus exporter inside the collector responds to the scrape request like this:
# HELP kestrel_connection_duration_seconds The duration of connections on the server.
# TYPE kestrel_connection_duration_seconds histogram
kestrel_connection_duration_seconds_bucket{job="OpenTelemetryDemo",le="5000.0"} 3 1.697141781462e+09 # 5.9060365 1.6971417814410365e+09
kestrel_connection_duration_seconds_bucket{job="OpenTelemetryDemo",le="+Inf"} 3 1.697141781462e+09
kestrel_connection_duration_seconds_sum{job="OpenTelemetryDemo"} 11.745288200000001 1.697141781462e+09
kestrel_connection_duration_seconds_count{job="OpenTelemetryDemo"} 3 1.697141781462e+09
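For context, in the OpenMetrics exposition format the exemplar is everything after a ` # ` on the sample line. A stdlib-only sketch (the helper name `splitExemplar` is invented for illustration) that separates the two parts of the problematic line:

```go
package main

import (
	"fmt"
	"strings"
)

// splitExemplar is a hypothetical helper: it splits an OpenMetrics sample
// line into the sample part and the exemplar part (everything after " # ").
func splitExemplar(line string) (sample, exemplar string) {
	if i := strings.Index(line, " # "); i >= 0 {
		return line[:i], line[i+3:]
	}
	return line, ""
}

func main() {
	line := `kestrel_connection_duration_seconds_bucket{job="OpenTelemetryDemo",le="5000.0"} 3 1.697141781462e+09 # 5.9060365 1.6971417814410365e+09`
	_, ex := splitExemplar(line)
	// The exemplar part starts with a value rather than a label set,
	// which is what the Prometheus scraper later rejects.
	fmt.Println(ex) // 5.9060365 1.6971417814410365e+09
}
```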
Hello @CodeBlanch, I was able to look into this, and it's actually an issue with the prometheus/client_golang package. You're hitting an instance of prometheus/client_golang#1333: the code path is different, but the investigation points at the same method. Let me know if anything here doesn't make sense; happy to help. Sorry for another redirection!
On the OpenTelemetry Collector side of things, here's the code path:
Inside convertDoubleHistogram:

// Prometheus here is "github.com/prometheus/client_golang/prometheus"
if len(exemplars) > 0 {
	// The exemplars currently have labels set to an empty map
	m, err = prometheus.NewMetricWithExemplars(m, exemplars...)
	if err != nil {
		return nil, err
	}
}
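If label-less exemplars needed to be kept away from that call as a temporary workaround, a guard could be sketched like this. This is illustrative only (the `exemplar` type and `filterLabeled` helper below are stand-ins invented for the sketch, not the collector's actual types):

```go
package main

import "fmt"

// exemplar is a minimal stand-in for prometheus.Exemplar, defined here so
// the sketch is self-contained; the real type lives in client_golang.
type exemplar struct {
	Labels map[string]string
	Value  float64
}

// filterLabeled is a hypothetical guard: it keeps only exemplars that carry
// at least one label, so the affected client_golang code path is never
// reached with an empty label set.
func filterLabeled(in []exemplar) []exemplar {
	out := in[:0]
	for _, e := range in {
		if len(e.Labels) > 0 {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	exemplars := []exemplar{
		{Labels: nil, Value: 5.9060365}, // no TraceId/SpanId -> no labels
		{Labels: map[string]string{"trace_id": "abc"}, Value: 1.0},
	}
	fmt.Println(len(filterLabeled(exemplars))) // 1
}
```

Note this drops data rather than fixing the formatting; the actual fix belongs in client_golang, as discussed above.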
Then, inside client_golang's newExemplar:

func newExemplar(value float64, ts time.Time, l Labels) (*dto.Exemplar, error) {
	...
	labelPairs := make([]*dto.LabelPair, 0, len(l))
	...
	// This loop is skipped since there are no labels
	for name, value := range l {
		...
	}
	e.Label = labelPairs
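The consequence of that empty `labelPairs` slice can be approximated with a stdlib-only sketch. `renderExemplar` below is an invented, simplified stand-in for the real serialization in client_golang, written just to show how an empty label map leads to the label-set braces being dropped from the output:

```go
package main

import "fmt"

// renderExemplar mimics, in simplified form, how the exemplar suffix ends
// up on the wire: labels first, then value and timestamp. With an empty
// label map the loop never runs, so no "{...}" is written at all -- the
// malformed output described in this issue.
func renderExemplar(labels map[string]string, value, ts float64) string {
	s := ""
	for name, v := range labels {
		if s != "" {
			s += ","
		}
		s += fmt.Sprintf("%s=%q", name, v)
	}
	if s != "" {
		return fmt.Sprintf("# {%s} %g %g", s, value, ts)
	}
	// Empty label set: the braces are omitted entirely.
	return fmt.Sprintf("# %g %g", value, ts)
}

func main() {
	fmt.Println(renderExemplar(nil, 5.9060365, 1.6971417814410365e+09))
	// # 5.9060365 1.6971417814410365e+09
}
```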
Component(s)
What happened?
[Originally reported here: https://github.com/prometheus/prometheus/issues/12975]
Description
I sent the OpenTelemetry Collector a histogram whose exemplar does NOT have a TraceId or SpanId:
The Prometheus exporter inside the collector responds to the scrape request like this:
This line...
kestrel_connection_duration_seconds_bucket{job="OpenTelemetryDemo",le="5000.0"} 3 1.697141781462e+09 # 5.9060365 1.6971417814410365e+09
...blows up the Prometheus scraper:
Collector version
37e7f494a600
Additional context
The Prometheus team looked at this (prometheus/prometheus#12975 (comment)) and said this line...

kestrel_connection_duration_seconds_bucket{job="OpenTelemetryDemo",le="5000.0"} 3 1.697141781462e+09 # 5.9060365 1.6971417814410365e+09

...should write out an empty label set ({}):

kestrel_connection_duration_seconds_bucket{job="OpenTelemetryDemo",le="5000.0"} 3 1.697141781462e+09 # {} 5.9060365 1.6971417814410365e+09
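The suggested behavior can be sketched in the same simplified style. `renderExemplarFixed` below is illustrative only (the real fix lives in prometheus/client_golang): it always writes the label set, emitting `{}` when it is empty.

```go
package main

import "fmt"

// renderExemplarFixed sketches the behavior the Prometheus team asked for:
// always write the exemplar's label set, producing "{}" when no labels
// (e.g. no TraceId/SpanId) are present.
func renderExemplarFixed(labels map[string]string, value, ts float64) string {
	s := "{"
	first := true
	for name, v := range labels {
		if !first {
			s += ","
		}
		s += fmt.Sprintf("%s=%q", name, v)
		first = false
	}
	s += "}"
	return fmt.Sprintf("# %s %g %g", s, value, ts)
}

func main() {
	fmt.Println(renderExemplarFixed(nil, 5.9060365, 1.6971417814410365e+09))
	// # {} 5.9060365 1.6971417814410365e+09
}
```

With an empty map this yields exactly the corrected suffix shown above, which the Prometheus scraper accepts.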