
prometheus exporter convert instrumentation scope to otel_scope_info metric #3357

Merged

Conversation

fatsheep9146
Contributor

@fatsheep9146 fatsheep9146 commented Oct 18, 2022

Fixes #3273.
Related to #3161.

Since #3285 already implements adding the target_info metric to the Prometheus exporter, I think I can wait for that PR to be merged and then add the otel_scope_info metric on top of that work.
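
For reference, once this change lands a scrape is expected to include an info-style series like the one below, alongside the target_info metric added in #3285 (the scope name/version values and the help text shown here are illustrative, not taken from this PR):

# HELP otel_scope_info Instrumentation Scope metadata
# TYPE otel_scope_info gauge
otel_scope_info{otel_scope_name="github.com/example/instrumentation",otel_scope_version="v0.1.0"} 1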

@fatsheep9146 fatsheep9146 force-pushed the prometheus_exporter_add_scope_info branch 2 times, most recently from 4ae23e1 to 6310a13 on October 23, 2022 06:54
@codecov

codecov bot commented Oct 23, 2022

Codecov Report

Merging #3357 (7efab63) into main (49b62ae) will increase coverage by 0.0%.
The diff coverage is 81.8%.

Additional details and impacted files

@@          Coverage Diff          @@
##            main   #3357   +/-   ##
=====================================
  Coverage   77.7%   77.8%           
=====================================
  Files        164     164           
  Lines      11457   11488   +31     
=====================================
+ Hits        8913    8942   +29     
- Misses      2342    2343    +1     
- Partials     202     203    +1     
Impacted Files                     Coverage Δ
exporters/prometheus/exporter.go   82.5% <79.4%> (+1.4%) ⬆️
exporters/prometheus/config.go     100.0% <100.0%> (ø)

@fatsheep9146 fatsheep9146 force-pushed the prometheus_exporter_add_scope_info branch from 6310a13 to 653b28b on October 23, 2022 07:08
@fatsheep9146 fatsheep9146 marked this pull request as ready for review on October 23, 2022 07:09
@fatsheep9146 fatsheep9146 force-pushed the prometheus_exporter_add_scope_info branch from e856489 to 1de3ce2 on October 25, 2022 01:38
Contributor

@dashpole dashpole left a comment

Note that this does not fully fix #3161. I believe you also need to implement open-telemetry/opentelemetry-specification#2890

@fatsheep9146 fatsheep9146 force-pushed the prometheus_exporter_add_scope_info branch from 768cdae to 2abb67e on October 25, 2022 10:58
@fatsheep9146
Contributor Author

fatsheep9146 commented Oct 25, 2022

Note that this does not fully fix #3161. I believe you also need to implement open-telemetry/opentelemetry-specification#2890

You are right; I mocked up a test as follows.

If I use two different meters to create two metrics with the same name, with scope info enabled, no error happens:

// Assumed imports for this snippet (it presumably lives in the exporters/prometheus
// package, so New and WithoutScopeInfo are that package's own functions):
//
//	"context"
//	"net/http"
//	"testing"
//
//	"github.com/prometheus/client_golang/prometheus/promhttp"
//	"github.com/stretchr/testify/assert"
//
//	"go.opentelemetry.io/otel/metric/instrument"
//	"go.opentelemetry.io/otel/metric/unit"
//	"go.opentelemetry.io/otel/sdk/metric"
func TestMetricWithSameName(t *testing.T) {
	exporter, err := New()
	assert.NoError(t, err)

	provider := metric.NewMeterProvider(
		metric.WithReader(exporter),
	)

	httpCounter, err := provider.Meter("http").SyncInt64().Counter("error_count", instrument.WithUnit(unit.Dimensionless))
	assert.NoError(t, err)
	httpCounter.Add(context.TODO(), 1)

	sqlCounter, err := provider.Meter("sql").SyncInt64().Counter("error_count", instrument.WithUnit(unit.Dimensionless))
	assert.NoError(t, err)
	sqlCounter.Add(context.TODO(), 1)

	t.Logf("serving metrics at localhost:2223/metrics")
	http.Handle("/metrics", promhttp.Handler())
	err = http.ListenAndServe(":2223", nil)
	if err != nil {
		t.Fatalf("error serving http: %v", err)
		return
	}
}

curl'ing the metrics port produces the following metrics (the two series share a name but differ in the otel_scope_name label, so the Prometheus client accepts them):

# HELP error_count_ratio_total
# TYPE error_count_ratio_total counter
error_count_ratio_total{otel_scope_name="http",otel_scope_version=""} 1
error_count_ratio_total{otel_scope_name="sql",otel_scope_version=""} 1

But if I disable scope info like this

func TestMetricWithSameName(t *testing.T) {
	exporter, err := New(WithoutScopeInfo())
	... // the rest of the code is unchanged
}

the following error occurs:

An error has occurred while serving metrics:

collected metric "error_count_ratio_total" { counter:<value:1 > } was collected before with the same name and label values

I think I can solve this problem in another PR. Is that OK? @dashpole @MrAlias

@dashpole
Contributor

dashpole commented Oct 25, 2022

If I use two different meters to create two metrics with the same name, with scope info enabled, no error happens

Can you try with a different description on each? I think that is why you aren't reproducing the issue.

Can you also try with different Counter vs UpDownCounter?

@fatsheep9146
Contributor Author

fatsheep9146 commented Oct 26, 2022

Can you try with a different description on each? I think that is why you aren't reproducing the issue.

For two Counters from two different meters with the same name but different descriptions, an error occurs. @dashpole

code

...
httpCounter, err := provider.Meter("http").
	SyncInt64().Counter(
		"error_count",
		instrument.WithUnit(unit.Dimensionless),
		instrument.WithDescription("http error count"))
httpCounter.Add(context.TODO(), 1)

sqlCounter, err := provider.Meter("sql").
	SyncInt64().Counter(
		"error_count",
		instrument.WithUnit(unit.Dimensionless),
		instrument.WithDescription("sql error count"))
sqlCounter.Add(context.TODO(), 1)
...

error

collected metric error_count_ratio_total label:<name:"otel_scope_name" value:"sql" > label:<name:"otel_scope_version" value:"" > counter:<value:1 >  has help "sql error count" but should have "http error count"

@fatsheep9146
Contributor Author

fatsheep9146 commented Oct 26, 2022

Can you also try with different Counter vs UpDownCounter?

For a Counter and an UpDownCounter from two different meters with the same name and the same description, no error occurs. @dashpole

code

httpCounter, err := provider.Meter("http").
	SyncInt64().Counter(
		"error_count",
		instrument.WithUnit(unit.Dimensionless))
httpCounter.Add(context.TODO(), 1)

sqlCounter, err := provider.Meter("sql").
	SyncInt64().UpDownCounter(
		"error_count",
		instrument.WithUnit(unit.Dimensionless))
sqlCounter.Add(context.TODO(), 1)

result

# HELP error_count_ratio
# TYPE error_count_ratio gauge
error_count_ratio{otel_scope_name="sql",otel_scope_version=""} 1
# HELP error_count_ratio_total
# TYPE error_count_ratio_total counter
error_count_ratio_total{otel_scope_name="http",otel_scope_version=""} 1
...

@MrAlias
Contributor

MrAlias commented Oct 27, 2022

collected metric "error_count_ratio_total" { counter:<value:1 > } was collected before with the same name and label values

I think I can solve this problem in another PR. Is that OK? @dashpole @MrAlias

@dashpole what's the expected behavior here? Just drop the second attempt to collect the second instrument and log an error?

@MrAlias MrAlias added this to the Metric v0.34.0 milestone Oct 27, 2022
@fatsheep9146 fatsheep9146 force-pushed the prometheus_exporter_add_scope_info branch from c5e22cf to c698aaf on October 30, 2022 13:28
@fatsheep9146 fatsheep9146 force-pushed the prometheus_exporter_add_scope_info branch from f20f64e to dbeea34 on November 1, 2022 05:41
@dashpole
Contributor

dashpole commented Nov 1, 2022

what's the expected behavior here? Just drop the second attempt to collect the second instrument and log an error?

From open-telemetry/opentelemetry-specification#2890

Example approach

  • Keep a map of metric name -> Description when constructing a batch of metrics for a scrape.
  • Each time a new metric (with Description) is added, look up the name in the map:
    • If it isn't found, add it to the map
    • If it is found, and the desc is the same, do nothing
    • If it is found, and has conflicting type information, drop the metric + log a warning
    • If it is found, and has conflicting help or unit, use the stored help + unit instead of the provided one + log a warning.

Basically, we need the description of all metrics with the same name to match.
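
A minimal, self-contained Go sketch of that approach (all names here — metricInfo, resolve, the string-typed "typ" field — are illustrative, not the exporter's actual implementation; unit conflicts would be handled analogously to help conflicts):

package main

import (
	"fmt"
	"log"
)

// metricInfo records the metadata first seen for a metric name within one scrape batch.
type metricInfo struct {
	typ  string // e.g. "counter", "gauge"
	help string
}

// resolve applies the approach above: the first help text seen for a name wins,
// and a metric whose type conflicts with the stored one is dropped with a warning.
func resolve(seen map[string]metricInfo, name, typ, help string) (drop bool, useHelp string) {
	prev, ok := seen[name]
	if !ok {
		// First time this name appears in the batch: record its metadata.
		seen[name] = metricInfo{typ: typ, help: help}
		return false, help
	}
	if prev.typ != typ {
		// Conflicting type information: drop the metric and warn.
		log.Printf("dropping metric %q: type %q conflicts with %q", name, typ, prev.typ)
		return true, ""
	}
	if prev.help != help {
		// Conflicting help text: keep the stored help and warn.
		log.Printf("metric %q: help %q conflicts with %q, keeping the first one", name, help, prev.help)
	}
	return false, prev.help
}

func main() {
	seen := map[string]metricInfo{}
	fmt.Println(resolve(seen, "error_count_ratio_total", "counter", "http error count")) // false "http error count"
	fmt.Println(resolve(seen, "error_count_ratio_total", "counter", "sql error count"))  // false "http error count" (warning logged)
	fmt.Println(resolve(seen, "error_count_ratio_total", "gauge", "sql error count"))    // true "" (warning logged)
}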

@MrAlias
Contributor

MrAlias commented Nov 1, 2022

👍

@dashpole are you good with merging this and then adding that tracking in another PR? That seems to make sense to me.

@dashpole
Contributor

dashpole commented Nov 2, 2022

@dashpole are you good with merging this and then adding that tracking in another PR? That seems to make sense to me.

Yes.

Development

Successfully merging this pull request may close these issues.

Add Instrumentation Scope and Version as info metric and label in Prometheus exporter