
Support meters with the same name but different set of tag keys in PrometheusMeterRegistry #877

Open
j0xaf opened this issue Sep 26, 2018 · 34 comments
Labels: registry: prometheus (A Prometheus Registry related issue)

Comments

j0xaf commented Sep 26, 2018

In connection with PrometheusMeterRegistry and the new KafkaConsumerMetrics (thanks to @jkschneider), I have a problem: metrics won't register correctly because PrometheusMeterRegistry complains about metrics with the same name but a different set of tag keys. It claims that Prometheus requires metrics of the same name to have the same set of tags.

I believe the error thrown at https://github.com/micrometer-metrics/micrometer/blob/master/implementations/micrometer-registry-prometheus/src/main/java/io/micrometer/prometheus/PrometheusMeterRegistry.java#L357 is wrong. Prometheus happily accepts metrics of the same name with disjoint sets of tags (the metric name itself is just a tag, __name__). In our production Prometheus instance we have metrics like up{application="xyz", job="abc"} and up{application="abc", job="123", host="prometheus"} coexisting without any issue whatsoever.

Example of the exception text (stack trace below):

Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter containing tag keys [application, client_id]. The meter you are attempting to register has keys [application, client_id, topic].

Context of variables

id = MeterId{name='kafka.consumer.records.lag.max', tags=[ImmutableTag{key='application', value='heimdall'}, ImmutableTag{key='client.id', value='consumer-1'}, ImmutableTag{key='topic', value='probe_t'}]}
existingCollector = MeterId{name='kafka.consumer.records.lag.max', tags=[ImmutableTag{key='application', value='heimdall'}, ImmutableTag{key='client.id', value='consumer-1'}]}

Stacktrace:

collectorByName:348, PrometheusMeterRegistry (io.micrometer.prometheus)
newGauge:225, PrometheusMeterRegistry (io.micrometer.prometheus)
lambda$gauge$1:244, MeterRegistry (io.micrometer.core.instrument)
apply:-1, 4552198 (io.micrometer.core.instrument.MeterRegistry$$Lambda$373)
lambda$registerMeterIfNecessary$5:514, MeterRegistry (io.micrometer.core.instrument)
apply:-1, 491928400 (io.micrometer.core.instrument.MeterRegistry$$Lambda$338)
getOrCreateMeter:563, MeterRegistry (io.micrometer.core.instrument)
registerMeterIfNecessary:525, MeterRegistry (io.micrometer.core.instrument)
registerMeterIfNecessary:514, MeterRegistry (io.micrometer.core.instrument)
gauge:244, MeterRegistry (io.micrometer.core.instrument)
register:128, Gauge$Builder (io.micrometer.core.instrument)
registerGaugeForObject:149, KafkaConsumerMetrics (io.micrometer.core.instrument.binder.kafka)
registerGaugeForObject:153, KafkaConsumerMetrics (io.micrometer.core.instrument.binder.kafka)
lambda$bindTo$0:79, KafkaConsumerMetrics (io.micrometer.core.instrument.binder.kafka)
accept:-1, 1260340889 (io.micrometer.core.instrument.binder.kafka.KafkaConsumerMetrics$$Lambda$328)
lambda$registerMetricsEventually$11:205, KafkaConsumerMetrics (io.micrometer.core.instrument.binder.kafka)
handleNotification:-1, 820539250 (io.micrometer.core.instrument.binder.kafka.KafkaConsumerMetrics$$Lambda$329)
handleNotification:1754, DefaultMBeanServerInterceptor$ListenerWrapper (com.sun.jmx.interceptor)
handleNotification:275, NotificationBroadcasterSupport (javax.management)
run:352, NotificationBroadcasterSupport$SendNotifJob (javax.management)
execute:337, NotificationBroadcasterSupport$1 (javax.management)
sendNotification:248, NotificationBroadcasterSupport (javax.management)
sendNotification:209, MBeanServerDelegate (javax.management)
sendNotification:1498, DefaultMBeanServerInterceptor (com.sun.jmx.interceptor)
registerWithRepository:1911, DefaultMBeanServerInterceptor (com.sun.jmx.interceptor)
registerDynamicMBean:966, DefaultMBeanServerInterceptor (com.sun.jmx.interceptor)
registerObject:900, DefaultMBeanServerInterceptor (com.sun.jmx.interceptor)
registerMBean:324, DefaultMBeanServerInterceptor (com.sun.jmx.interceptor)
registerMBean:522, JmxMBeanServer (com.sun.jmx.mbeanserver)
reregister:167, JmxReporter (org.apache.kafka.common.metrics)
metricChange:85, JmxReporter (org.apache.kafka.common.metrics)
registerMetric:545, Metrics (org.apache.kafka.common.metrics)
add:256, Sensor (org.apache.kafka.common.metrics)
add:241, Sensor (org.apache.kafka.common.metrics)
recordTopicFetchMetrics:1291, Fetcher$FetchManagerMetrics (org.apache.kafka.clients.consumer.internals)
access$3200:1246, Fetcher$FetchManagerMetrics (org.apache.kafka.clients.consumer.internals)
record:1230, Fetcher$FetchResponseMetricAggregator (org.apache.kafka.clients.consumer.internals)
drain:982, Fetcher$PartitionRecords (org.apache.kafka.clients.consumer.internals)
nextFetchedRecord:1033, Fetcher$PartitionRecords (org.apache.kafka.clients.consumer.internals)
fetchRecords:1095, Fetcher$PartitionRecords (org.apache.kafka.clients.consumer.internals)
access$1200:949, Fetcher$PartitionRecords (org.apache.kafka.clients.consumer.internals)
fetchRecords:570, Fetcher (org.apache.kafka.clients.consumer.internals)
fetchedRecords:531, Fetcher (org.apache.kafka.clients.consumer.internals)
pollOnce:1178, KafkaConsumer (org.apache.kafka.clients.consumer)
poll:1111, KafkaConsumer (org.apache.kafka.clients.consumer)
run:699, KafkaMessageListenerContainer$ListenerConsumer (org.springframework.kafka.listener)
call:511, Executors$RunnableAdapter (java.util.concurrent)
run:266, FutureTask (java.util.concurrent)
run:748, Thread (java.lang)
jkschneider (Contributor) commented Sep 27, 2018

This actually is enforced by the underlying Prometheus client that Micrometer is delegating to. It's possible for distinct processes to send metrics with different sets of tag keys, but Prometheus does not allow it from a single process.

All Micrometer is doing is giving you a more readable error message about it before the Prometheus client blows up.
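
For what it's worth, here is a minimal, self-contained sketch that reproduces the restriction with nothing but a PrometheusMeterRegistry (the meter name and tags are made up for illustration; depending on the Micrometer version, the second registration either throws or is silently dropped):

import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class SameNameDifferentKeys {
    public static void main(String[] args) {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        // The first registration fixes the tag-key set for this name: [result]
        registry.counter("my.counter", "result", "ok").increment();

        // Same name, different tag-key set [result, topic] -> rejected with
        // "Prometheus requires that all meters with the same name have the same set of tag keys."
        registry.counter("my.counter", "result", "ok", "topic", "probe_t").increment();
    }
}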

jkschneider added the "question (A user question, probably better suited for StackOverflow)" label on Sep 27, 2018
flozano commented Jan 23, 2019

So basically this is expected? Kafka metrics as configured by KafkaConsumerMetrics are incompatible with a Prometheus-based registry? I'm getting:

java.lang.IllegalArgumentException: Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'kafka_consumer_records_lag_records' containing tag keys [client_id, topic]. The meter you are attempting to register has keys [client_id].

izeye (Contributor) commented Feb 18, 2019

@flozano Try the latest version if you haven't already, as there have been recent updates to the Kafka binder.

@jdbranham commented:

If the meter tags are different, could we unregister and then re-register the meter with a combined list of tags?
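
A minimal sketch of that idea, using Micrometer's public find/remove API (the meter name, counter type, and helper are illustrative, not a worked-out fix):

import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;

class ReRegisterSketch {
    // Drop whatever meter is already registered under this name, then
    // register a replacement carrying the combined tag list.
    static void reRegister(MeterRegistry registry, String name, Tags combinedTags) {
        Meter existing = registry.find(name).meter();
        if (existing != null) {
            registry.remove(existing);
        }
        registry.counter(name, combinedTags);
    }
}

Whether this actually clears the underlying Prometheus collector (and its fixed tag-key set) depends on the registry implementation, so treat it as a starting point only.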

Also, am I looking at the right Prometheus client implementation?
https://github.com/prometheus/client_java/blob/master/simpleclient/src/main/java/io/prometheus/client/CollectorRegistry.java#L49

cschuyle commented Apr 1, 2020

Hi @jdbranham, have you been able to fix or work around this? I have a similar problem where PrometheusMeterRegistry conflicts with something else, but instead of KafkaConsumerMetrics, it's because I'm using Spring Boot Actuator.

If I annotate the controller method with just @Timed with no parameters, there is no error. However, if I use @Timed with a name argument, like this:

    @Timed("some-operation")
    public void someOperation() {}

... then it errors when someOperation() is called. Hypothesis: both Actuator and Prometheus are trying to use the annotation to create a metric (each with its own set of tags), but Prometheus requires that the name be unique.

Stack trace:

java.lang.IllegalArgumentException: Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'some_operation_seconds' containing tag keys [class, exception, method]. The meter you are attempting to register has keys [exception, method, outcome, status, uri].
	at io.micrometer.prometheus.PrometheusMeterRegistry.lambda$collectorByName$9(PrometheusMeterRegistry.java:382) ~[micrometer-registry-prometheus-1.3.5.jar:1.3.5]
...
org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.getTimer(WebMvcMetricsFilter.java:185) ~[spring-boot-actuator-2.2.5.RELEASE.jar:2.2.5.RELEASE]

I've also tried upgrading to a newer version of Micrometer (1.4.1), but the result is the same.

@KeithWoods commented:

@flozano @izeye I think this issue still exists.

I think this is because some metrics exist at multiple levels with different tags, for example kafka_consumer_fetch_manager_fetch_size_max. It exists at the client level but also at the topic/partition level. At the client level there is no associated topic tag; at the topic/partition level there is. One of the two will get registered first, and the second will then cause an exception because the list of tag keys is different.
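
In other words, the binder effectively attempts two registrations like the following sketch (metric name from above; tag values made up):

import java.util.concurrent.atomic.AtomicLong;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;

class FetchSizeCollision {
    static void bind(MeterRegistry registry) {
        // Client-level meter: tag keys [client_id]
        registry.gauge("kafka.consumer.fetch.manager.fetch.size.max",
                Tags.of("client_id", "consumer-1"),
                new AtomicLong(0), AtomicLong::doubleValue);

        // Topic-level meter: tag keys [client_id, topic]; whichever of the
        // two registers second loses against PrometheusMeterRegistry
        registry.gauge("kafka.consumer.fetch.manager.fetch.size.max",
                Tags.of("client_id", "consumer-1", "topic", "probe_t"),
                new AtomicLong(0), AtomicLong::doubleValue);
    }
}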

Here is the above metric in jconsole:

[screenshot: kafkaConsumerMetricIssue]

Sometimes this causes an uncaught exception originating here (note: 1.5 branch); other times it seems silent. Sometimes the metrics in question show up on my Prometheus endpoint, sometimes they don't. This is a bit vague because I don't fully understand why; I'm guessing the meter that wins the race above shows up and the other doesn't, and the outcome may differ depending on the order in which metrics get registered.

When I do see an exception, this is it:

Caused by: java.lang.IllegalArgumentException: Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'kafka_consumer_fetch_manager_fetch_size_max' containing tag keys [client_id, kafka_version, spring_id, topic]. The meter you are attempting to register has keys [client_id, kafka_version, spring_id].
	at io.micrometer.prometheus.PrometheusMeterRegistry.lambda$applyToCollector$17(PrometheusMeterRegistry.java:429)
	at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1877)
	at io.micrometer.prometheus.PrometheusMeterRegistry.applyToCollector(PrometheusMeterRegistry.java:413)
	at io.micrometer.prometheus.PrometheusMeterRegistry.newGauge(PrometheusMeterRegistry.java:207)
	at io.micrometer.core.instrument.MeterRegistry.lambda$gauge$1(MeterRegistry.java:295)
	at io.micrometer.core.instrument.MeterRegistry.lambda$registerMeterIfNecessary$5(MeterRegistry.java:559)
	at io.micrometer.core.instrument.MeterRegistry.getOrCreateMeter(MeterRegistry.java:612)
	at io.micrometer.core.instrument.MeterRegistry.registerMeterIfNecessary(MeterRegistry.java:566)
	at io.micrometer.core.instrument.MeterRegistry.registerMeterIfNecessary(MeterRegistry.java:559)
	at io.micrometer.core.instrument.MeterRegistry.gauge(MeterRegistry.java:295)
	at io.micrometer.core.instrument.Gauge$Builder.register(Gauge.java:190)
	at io.micrometer.core.instrument.composite.CompositeGauge.registerNewMeter(CompositeGauge.java:58)
	at io.micrometer.core.instrument.composite.CompositeGauge.registerNewMeter(CompositeGauge.java:27)
	at io.micrometer.core.instrument.composite.AbstractCompositeMeter.add(AbstractCompositeMeter.java:66)
	at java.lang.Iterable.forEach(Iterable.java:75)
	at java.util.Collections$SetFromMap.forEach(Collections.java:5476)
	at io.micrometer.core.instrument.composite.CompositeMeterRegistry.lambda$new$0(CompositeMeterRegistry.java:65)
	at io.micrometer.core.instrument.composite.CompositeMeterRegistry.lock(CompositeMeterRegistry.java:184)
	at io.micrometer.core.instrument.composite.CompositeMeterRegistry.lambda$new$1(CompositeMeterRegistry.java:65)
	at io.micrometer.core.instrument.MeterRegistry.getOrCreateMeter(MeterRegistry.java:622)
	at io.micrometer.core.instrument.MeterRegistry.registerMeterIfNecessary(MeterRegistry.java:566)
	at io.micrometer.core.instrument.MeterRegistry.registerMeterIfNecessary(MeterRegistry.java:559)
	at io.micrometer.core.instrument.MeterRegistry.gauge(MeterRegistry.java:295)
	at io.micrometer.core.instrument.Gauge$Builder.register(Gauge.java:190)
	at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.registerGauge(KafkaMetrics.java:182)
	at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.bindMeter(KafkaMetrics.java:175)
	at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.lambda$checkAndBindMetrics$1(KafkaMetrics.java:161)
	at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
	at java.util.Collections$UnmodifiableMap.forEach(Collections.java:1505)
	at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.checkAndBindMetrics(KafkaMetrics.java:137)
	at io.micrometer.core.instrument.binder.kafka.KafkaMetrics.bindTo(KafkaMetrics.java:93)
	at io.micrometer.core.instrument.binder.kafka.KafkaClientMetrics.bindTo(KafkaClientMetrics.java:39)
	at org.springframework.kafka.core.MicrometerConsumerListener.consumerAdded(MicrometerConsumerListener.java:74)
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:301)
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:271)
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:245)
	at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:219)
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:592)
	at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:294)
	at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:338)

I'm not sure what to do for now. As @jkschneider said, it's not really a Micrometer problem; it's more that these metrics are not really compatible with Prometheus, or perhaps they are not compatible when auto-registered as they are now. If anyone has ideas, let me know.

I'm using this via Spring Kafka's KafkaMessageListenerContainer, which internally uses KafkaClientMetrics. I noticed there is a deprecated KafkaConsumerMetrics in the Micrometer codebase that takes a more manual approach to registering metrics. I'll probably do something similar to register just the metrics I care about and see how that goes.

@KeithWoods commented:

A bit more information: after looking at tweaking KafkaMetrics to solve my issue, I noticed it has some logic to remove and re-add existing meters if the tags are different (as was suggested in this thread); however, it only does this if the clients are the same. In my case above the clients are different, as I have multiple consumers with differing client IDs, so the remove/add logic wasn't run. The logic in question:
https://github.com/micrometer-metrics/micrometer/blob/1.5.x/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/kafka/KafkaMetrics.java#L152-L158

If I remove if (differentClient(tags)) break; everything appears to be fine. I would like to understand the impact of this.

fieldju commented Jun 29, 2020

Okay, I am running into this issue as well.

I copied and pasted the registry and modified the offending code so that it creates a new collector instead.

I have written a functional test that proves that #877 (comment) is no longer true.

https://github.com/armory-plugins/armory-observability-plugin/blob/2819e3f6b4e2c6a8ad88fbd01511a52281c4f957/common/src/test/java/io/armory/plugin/observability/prometheus/PrometheusScrapeControllerFunctionalTest.java#L87-L117

I have also written an integration test that proves that Prometheus can handle scraping metrics that have the same name but varying labels.

https://github.com/armory-plugins/armory-observability-plugin/blob/2819e3f6b4e2c6a8ad88fbd01511a52281c4f957/common/src/test/java/io/armory/plugin/observability/prometheus/PrometheusScrapeControllerIntegrationTest.java#L70-L169

I feel like we could easily replace

private void applyToCollector(Meter.Id id, Consumer<MicrometerCollector> consumer) {
    collectorMap.compute(getConventionName(id), (name, existingCollector) -> {
        if (existingCollector == null) {
            MicrometerCollector micrometerCollector = new MicrometerCollector(id, config().namingConvention(), prometheusConfig);
            consumer.accept(micrometerCollector);
            return micrometerCollector.register(registry);
        }
        List<String> tagKeys = getConventionTags(id).stream().map(Tag::getKey).collect(toList());
        if (existingCollector.getTagKeys().equals(tagKeys)) {
            consumer.accept(existingCollector);
            return existingCollector;
        }
        meterRegistrationFailed(id, "Prometheus requires that all meters with the same name have the same" +
                " set of tag keys. There is already an existing meter named '" + id.getName() + "' containing tag keys [" +
                String.join(", ", collectorMap.get(getConventionName(id)).getTagKeys()) + "]. The meter you are attempting to register" +
                " has keys [" + getConventionTags(id).stream().map(Tag::getKey).collect(joining(", ")) + "].");
        return null;
    });
}

with

  private MicrometerCollector collectorByName(Meter.Id id) {
    return collectorMap.compute(
        getConventionName(id),
        (name, existingCollector) -> {
          List<String> tagKeys = getConventionTags(id).stream().map(Tag::getKey).collect(toList());
          if (existingCollector != null && existingCollector.getTagKeys().equals(tagKeys)) {
            return existingCollector;
          }
          return new MicrometerCollector(id, config().namingConvention(), prometheusConfig)
              .register(registry);
        });
  }

And eliminate the source of this developer friction when using this registry.


dnowak-wrldrmt commented Jul 8, 2020

Hi, I have encountered the issue because I add some tags (like the error name) based on the result of an operation.

This actually is enforced by the underlying Prometheus client that Micrometer is delegating to. It's possible for distinct processes to send metrics with different sets of tag keys, but Prometheus does not allow it from a single process.

All Micrometer is doing is giving you a more readable error message about it before the Prometheus client blows up.

I have checked the Prometheus documentation and there is no information about such a requirement; it looks like a metric name can be associated with a varying number of tags/labels:

Labels enable Prometheus's dimensional data model: any given combination of labels for the same metric name identifies a particular dimensional instantiation of that metric (for example: all HTTP requests that used the method POST to the /api/tracks handler). The query language allows filtering and aggregation based on these dimensions. Changing any label value, including adding or removing a label, will create a new time series.

Source: https://prometheus.io/docs/concepts/data_model/

It looks like Micrometer's integration is too strict.

@checketts (Contributor) commented:

I have checked the Prometheus documentation and there is no information about such a requirement; it looks like a metric name can be associated with a varying number of tags/labels:

Thanks for checking the documentation!

Unfortunately, the underlying Prometheus client isn't implemented to that specification. See https://github.com/prometheus/client_java/blob/master/simpleclient/src/main/java/io/prometheus/client/SimpleCollector.java#L63

So you could open a ticket with that repo to try to get it improved; or, if you can think of a good way to work around that limitation, that would also be welcome.


shakuzen added the "registry: prometheus (A Prometheus Registry related issue)" label on Jan 25, 2021
shakuzen changed the title from "PrometheusMeterRegistry requires meters with the same name to have same set of tag keys" to "Support meters with the same name but different set of tag keys in PrometheusMeterRegistry" on Jan 25, 2021
shakuzen removed the "question (A user question, probably better suited for StackOverflow)" label on Jan 25, 2021
@shakuzen (Member) commented:

Hi, I have encountered the issue because I add some tags (like the error name) based on the result of an operation.

@dnowak-wrldrmt The way we dealt with this in, for example, Spring Web MVC metrics is to always have an exception tag; its value is none if there was no exception.
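
A minimal sketch of that convention (the meter name is illustrative): tag every recording with the exception key, using the value none on the success path, so the tag-key set never varies.

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

class ExceptionTagConvention {
    static void record(MeterRegistry registry, Throwable failure) {
        // Tag keys are always [exception]; only the value differs per outcome.
        Counter.builder("some.operation")
                .tag("exception", failure == null ? "none" : failure.getClass().getSimpleName())
                .register(registry)
                .increment();
    }
}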

I'm reopening this for some more consideration, as it keeps coming up, and we've made recent changes so that by default an exception is no longer thrown in these cases. The current state is still that the scrape will not contain metrics with different labels, though. I'm reopening to see if we can make that happen somehow.

After some investigation into this, I will echo what @checketts said in #877 (comment): the Prometheus Java client does not seem to support this by design. That being said, it is indeed true that the Prometheus server can and will scrape metrics with different labels, with no problem as far as I can tell.

@fieldju thank you for the tests; they were useful to look at. The modifications, however, cause meters to end up as different metrics with the same name, which violates the Prometheus scrape format, even though the Prometheus server seems to have no problem scraping it.

Specifically, your modifications result in a scrape like:

# HELP foo_total  
# TYPE foo_total counter
foo_total{hostname="localhost",optionalExtraMetadata="my-cool-value",} 1.0
# HELP foo_total  
# TYPE foo_total counter
foo_total{hostname="localhost",} 1.0

The above violates the following snippets from the Prometheus scrape format:

Only one HELP line may exist for any given metric name.

All lines for a given metric must be provided as one single group, with the optional HELP and TYPE lines first (in no particular order).

So my understanding is that the scrape should be like:

# HELP foo_total  
# TYPE foo_total counter
foo_total{hostname="localhost",optionalExtraMetadata="my-cool-value",} 1.0
foo_total{hostname="localhost",} 1.0

Trying to register things with the Prometheus client so that this happens results in the aforementioned exception. We need to find a way to properly register them, which may not currently exist. In that case, we would need to see if the Prometheus Java client is willing to make changes to support this, as suggested by @checketts. For example, here is a pure Prometheus Java client unit test that fails trying to do what this issue asks to support:

import java.io.IOException;
import java.io.StringWriter;

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.common.TextFormat;
import org.junit.jupiter.api.Test;

@Test
void promClientSameMetricNameDifferentLabels() throws IOException {
    CollectorRegistry registry = new CollectorRegistry();
    Counter counter = Counter.build().name("my_counter").labelNames("hi").help("unhelpful string")
            .register(registry);
    Counter counter1 = Counter.build().name("my_counter").labelNames("hello").help("unhelpful string")
            .register(registry); // fails with "Collector already registered that provides name: my_counter"

    counter.labels("world").inc(2);
    counter1.labels("world").inc(3);

    StringWriter writer = new StringWriter();
    TextFormat.write004(writer, registry.metricFamilySamples());
    System.out.println(writer.toString());
}

@shakuzen (Member) commented:

I've opened prometheus/client_java#696 in the repository for the official Java client for Prometheus to get some input from the Prometheus experts on this.

@stolsvik commented:

Just want to chime in that the result of effectively "dropping" later attempts to register any meter with the same name but different tags (when using PrometheusMeterRegistry), without any warning or anything in the logs, has been very frustrating, costing me several hours of debugging and then googling before finding these issues.

All the documentation says abundantly clearly that meters are unique as long as the combination of name and tags is different. It quickly became obvious that changing the name of the second set of meters with different tags "fixed" the problem, but this is not what the docs state.

stolsvik commented Oct 16, 2021

Note that the observed behavior is that when adding new meters with the same name but differing tag keys (in number or name), the later registrations are ignored. When only the values of the tags change, it works correctly. Therefore, a "fix" is to make both sets of meters (having the same name) also have the same set of tag keys, where the irrelevant tags of each subset are set to a fixed dummy value (or, maybe even better, the empty string).
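
A minimal sketch of that workaround (names and values made up): pad the tag that does not apply with an empty-string value so both meters share the tag-key set [client_id, topic].

import java.util.concurrent.atomic.AtomicLong;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;

class PaddedTagsWorkaround {
    static void bind(MeterRegistry registry) {
        // Client-level meter: the topic tag is present but padded
        registry.gauge("kafka.consumer.fetch.size.max",
                Tags.of("client_id", "consumer-1", "topic", ""),
                new AtomicLong(0), AtomicLong::doubleValue);

        // Topic-level meter: same name, same tag keys, real topic value
        registry.gauge("kafka.consumer.fetch.size.max",
                Tags.of("client_id", "consumer-1", "topic", "probe_t"),
                new AtomicLong(0), AtomicLong::doubleValue);
    }
}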

flozano commented May 2, 2023

This issue keeps coming back, not only with Kafka but also, for example, with the Micrometer Apache client interceptor (async instrumentation) plus the executor (sync instrumentation). If you use both, you get slightly different sets of tags (the sync version includes "outcome", the async version does not), and your async metrics are gone.

@rdehuyss commented:

Thanks @stolsvik! With your input I at least have a workaround.

@wyp900917 commented, quoting @cschuyle's comment above in full:

Have you resolved this problem?

@iRitiLopes commented:

Thanks @stolsvik! With your input I at least have a workaround.

How? By forcing all tag sets to be the same?
