
Prometheus Input: Metrics Dropped from Outputs When Overlapping Name Exists #7705

Closed
tgraskemper opened this issue Jun 18, 2020 · 4 comments · Fixed by #7740
Labels: area/prometheus, bug (unexpected problem or unintended behavior)
Milestone: 1.14.5

Comments

@tgraskemper

tgraskemper commented Jun 18, 2020

Relevant telegraf.conf:

input.conf

```toml
[[inputs.prometheus]]
  metric_version = 2
  tagexclude = ["url"]
  urls = ["http://localhost:8203/{app_name}/manage/prometheus"]
  [inputs.prometheus.tags]
    application = "{app_name}"
    component = "{app_component}"
    product = "{app_product}"
```

outputs.conf

```toml
[[outputs.prometheus_client]]
  listen = ":9126"
  metric_version = 2
```
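To confirm which metric families go missing between the two endpoints, a quick diff of the `# TYPE` lines works. A minimal Python sketch with inline sample expositions standing in for the live endpoints (the sample data mirrors the outputs shown below; this is not part of the original report):

```python
# Sketch: compare the metric families exposed by two Prometheus text
# expositions (e.g. the app's /manage/prometheus vs Telegraf's :9126/metrics)
# to spot families that were dropped in between.

def families(exposition: str) -> set:
    """Extract metric family names from '# TYPE <name> <type>' lines."""
    return {
        line.split()[2]
        for line in exposition.splitlines()
        if line.startswith("# TYPE ")
    }

# Inline stand-ins for the two endpoints' responses.
app = (
    "# TYPE http_server_requests_seconds summary\n"
    "# TYPE http_server_requests_seconds_max gauge\n"
)
telegraf = "# TYPE http_server_requests_seconds_max gauge\n"

dropped = families(app) - families(telegraf)
print(dropped)  # {'http_server_requests_seconds'}
```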

System info:

Telegraf version: 1.14.4
Operating System: Linux {hostname} 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Micrometer version: 1.4.1

Steps to reproduce:

  1. Run a Spring Boot application with the Micrometer version specified above
  2. Compare the Micrometer endpoint's metrics to the Telegraf endpoint's metrics

Expected behavior:

I expect all metrics pulled from the inputs to be available at the prometheus_client output endpoint.

Actual behavior:

Metrics are dropped when a measurement name overlaps with another metric's name: below, the http_server_requests_seconds summary is present on the application endpoint but missing from the Telegraf output, while http_server_requests_seconds_max comes through.

Additional info:

curl http://localhost:8203/{appname}/manage/prometheus | grep http_

```
# HELP http_server_requests_seconds
# TYPE http_server_requests_seconds summary
http_server_requests_seconds_count{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/fs/file/{entity}/{uuid}",} 6.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/fs/file/{entity}/{uuid}",} 0.189196678
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/fs/file/{entity}/{uuid}",} 31.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/fs/file/{entity}/{uuid}",} 0.848579549
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 10.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 0.075894945
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/fs/meta/{entity}/{uuid}",} 40.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/fs/meta/{entity}/{uuid}",} 0.244792639
http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 5.0
http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 0.678818459
http_server_requests_seconds_count{exception="None",method="HEAD",outcome="SUCCESS",status="200",uri="/manage/health",} 1.0
http_server_requests_seconds_sum{exception="None",method="HEAD",outcome="SUCCESS",status="200",uri="/manage/health",} 0.006859294
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/prometheus",} 1089.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/prometheus",} 6.067267536
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/health",} 3160.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/health",} 9.796686909
http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/v1/fs/meta/{entity}/{uuid}",} 9.0
http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/v1/fs/meta/{entity}/{uuid}",} 0.221391574
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/manage/health",} 2168.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/manage/health",} 7.71047573
http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/fs/file/{entity}/{uuid}",} 16.0
http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/fs/file/{entity}/{uuid}",} 0.498193506
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/manage/info",} 4861.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/manage/info",} 11.43168424
http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/fs/meta/{entity}/{uuid}",} 14.0
http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/fs/meta/{entity}/{uuid}",} 0.224857192
http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 6.0
http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 0.108679761
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/info",} 54496.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/info",} 77.109188151
http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/fs/meta/{entity}/{uuid}",} 10.0
http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/fs/meta/{entity}/{uuid}",} 0.058950662
# HELP http_server_requests_seconds_max
# TYPE http_server_requests_seconds_max gauge
http_server_requests_seconds_max{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/fs/file/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/fs/file/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/fs/meta/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="HEAD",outcome="SUCCESS",status="200",uri="/manage/health",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/prometheus",} 0.007259675
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/health",} 0.003720458
http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/v1/fs/meta/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/manage/health",} 0.295731296
http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/fs/file/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/manage/info",} 0.0
http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/fs/meta/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/v1/fs/file/{entity}/{uuid}",} 0.0
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/manage/info",} 0.004401147
http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/v1/fs/meta/{entity}/{uuid}",} 0.0
```

curl http://localhost:9126/metrics | grep http_

```
# HELP http_server_requests_seconds_max Telegraf collected metric
# TYPE http_server_requests_seconds_max gauge
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="DELETE",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/fs/file/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="CLIENT_ERROR",product="product",provider="op",region="region",status="404",uri="/fs/file/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/fs/file/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/fs/meta/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/manage/health"} 0.003720458
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/manage/info"} 0.004401147
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/manage/prometheus"} 0.005457704
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/fs/file/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/fs/meta/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/manage/health"} 0.295731296
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="GET",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/manage/info"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="HEAD",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/manage/health"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="POST",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/fs/file/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="POST",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/fs/meta/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="POST",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/fs/file/{entity}/{uuid}"} 0
http_server_requests_seconds_max{application="appname",component="api",environment="qa",exception="None",method="POST",outcome="SUCCESS",product="product",provider="op",region="region",status="200",uri="/v1/fs/meta/{entity}/{uuid}"} 0
```
@danielnelson
Contributor

It looks to me like the `TYPE http_server_requests_seconds summary` family is being dropped because it is incomplete: all summary types should have one or more quantile series, for example:

```
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 7.4545e-05
go_gc_duration_seconds{quantile="0.25"} 7.6999e-05
go_gc_duration_seconds{quantile="0.5"} 0.000277935
go_gc_duration_seconds{quantile="0.75"} 0.000706591
go_gc_duration_seconds{quantile="1"} 0.000706591
go_gc_duration_seconds_sum 0.00113607
go_gc_duration_seconds_count 4
```
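For context, here is a minimal sketch (not Telegraf's actual code) of how a text-format consumer might collect a summary family; the Micrometer-style sample below has only `_count` and `_sum` series, which is still a well-formed summary even though its quantile list comes back empty, and a parser that insists on quantiles would discard exactly this kind of family:

```python
# Hypothetical sketch: gather quantile/sum/count samples for one summary
# family from Prometheus text-exposition lines.

def parse_summary(lines, family):
    """Collect the samples belonging to one summary family."""
    samples = {"quantiles": [], "sum": None, "count": None}
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blanks
        name = line.split("{")[0].split(" ")[0]
        value = float(line.rsplit(" ", 1)[1])
        if name == family and "quantile=" in line:
            samples["quantiles"].append(value)
        elif name == family + "_sum":
            samples["sum"] = value
        elif name == family + "_count":
            samples["count"] = value
    return samples

# Micrometer-style summary: _count and _sum only, no quantile series.
micrometer_style = [
    "# TYPE http_server_requests_seconds summary",
    'http_server_requests_seconds_count{method="GET",uri="/manage/health"} 3160.0',
    'http_server_requests_seconds_sum{method="GET",uri="/manage/health"} 9.796686909',
]

s = parse_summary(micrometer_style, "http_server_requests_seconds")
# s["quantiles"] is empty, yet the family is valid; a consumer that
# requires quantiles would drop it, which matches the behavior reported here.
```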

@tgraskemper
Author

I forgot to mention that this started after the switch to metric_version = 2. When both inputs and outputs used metric_version = 1, we did not see this issue.

That said, if what you're saying is true, then I suppose this could be an issue with Micrometer. I don't understand why quantiles are required in order to rewrite and output metrics to Prometheus, regardless of type. Could you explain the reasoning behind this behavior?

@tgraskemper
Author

tgraskemper commented Jun 19, 2020

I've run telegraf with debug enabled to attempt to get more information on this.

Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! Starting Telegraf 1.14.4
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! Loaded inputs: prometheus exec prometheus docker exec ntpq prometheus
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! Loaded aggregators:
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! Loaded processors:
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! Loaded outputs: prometheus_client
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! Tags enabled: environment=qa provider=op region=region
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"", Flush Interval:10s
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z D! [agent] Initializing plugins
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z D! [agent] Connecting outputs
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z D! [agent] Attempting connection to [outputs.prometheus_client]
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z I! [outputs.prometheus_client] Listening on http://[::]:9126/metrics
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z D! [agent] Successfully connected to outputs.prometheus_client
Jun 19 08:39:52 servername.dnszone telegraf[32216]: 2020-06-19T15:39:52Z D! [agent] Starting service inputs
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 5.552898ms
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Buffer fullness: 4 / 10000 metrics
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 20.647139ms
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Buffer fullness: 2912 / 10000 metrics
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 16.362104ms
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Buffer fullness: 2565 / 10000 metrics
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 16.729145ms
Jun 19 08:40:00 servername.dnszone telegraf[32216]: 2020-06-19T15:40:00Z D! [outputs.prometheus_client] Buffer fullness: 1565 / 10000 metrics
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 9.58389ms
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Buffer fullness: 1581 / 10000 metrics
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 6.099433ms
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Buffer fullness: 2149 / 10000 metrics
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 8.582464ms
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Buffer fullness: 2814 / 10000 metrics
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 10.082389ms
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Buffer fullness: 3145 / 10000 metrics
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 10.109086ms
Jun 19 08:40:10 servername.dnszone telegraf[32216]: 2020-06-19T15:40:10Z D! [outputs.prometheus_client] Buffer fullness: 2145 / 10000 metrics
Jun 19 08:40:12 servername.dnszone telegraf[32216]: 2020-06-19T15:40:12Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 8.15061ms
Jun 19 08:40:12 servername.dnszone telegraf[32216]: 2020-06-19T15:40:12Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 6.86393ms
Jun 19 08:40:12 servername.dnszone telegraf[32216]: 2020-06-19T15:40:12Z D! [outputs.prometheus_client] Wrote batch of 159 metrics in 1.203885ms
Jun 19 08:40:12 servername.dnszone telegraf[32216]: 2020-06-19T15:40:12Z D! [outputs.prometheus_client] Buffer fullness: 0 / 10000 metrics
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 6.295212ms
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Buffer fullness: 2 / 10000 metrics
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 8.492225ms
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Buffer fullness: 1430 / 10000 metrics
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 8.733686ms
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Buffer fullness: 2243 / 10000 metrics
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 7.490444ms
Jun 19 08:40:20 servername.dnszone telegraf[32216]: 2020-06-19T15:40:20Z D! [outputs.prometheus_client] Buffer fullness: 1566 / 10000 metrics
Jun 19 08:40:23 servername.dnszone telegraf[32216]: 2020-06-19T15:40:23Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 7.394221ms
Jun 19 08:40:23 servername.dnszone telegraf[32216]: 2020-06-19T15:40:23Z D! [outputs.prometheus_client] Wrote batch of 580 metrics in 5.106376ms
Jun 19 08:40:23 servername.dnszone telegraf[32216]: 2020-06-19T15:40:23Z D! [outputs.prometheus_client] Buffer fullness: 0 / 10000 metrics
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 8.140831ms
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Buffer fullness: 2 / 10000 metrics
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 13.747438ms
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Buffer fullness: 1801 / 10000 metrics
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 10.835743ms
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Buffer fullness: 1749 / 10000 metrics
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 13.268818ms
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Buffer fullness: 1566 / 10000 metrics
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 14.56946ms
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Buffer fullness: 566 / 10000 metrics
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Wrote batch of 566 metrics in 4.797504ms
Jun 19 08:40:30 servername.dnszone telegraf[32216]: 2020-06-19T15:40:30Z D! [outputs.prometheus_client] Buffer fullness: 0 / 10000 metrics
Jun 19 08:40:40 servername.dnszone telegraf[32216]: 2020-06-19T15:40:40Z D! [outputs.prometheus_client] Wrote batch of 1000 metrics in 8.768974ms
Jun 19 08:40:40 servername.dnszone telegraf[32216]: 2020-06-19T15:40:40Z D! [outputs.prometheus_client] Buffer fullness: 16 / 10000 metrics
Jun 19 08:40:40 servername.dnszone telegraf[32216]: 2020-06-19T15:40:40Z D! [outputs.prometheus_client] Wrote batch of 19 metrics in 2.117924ms
Jun 19 08:40:40 servername.dnszone telegraf[32216]: 2020-06-19T15:40:40Z D! [outputs.prometheus_client] Buffer fullness: 0 / 10000 metrics

@danielnelson danielnelson added bug unexpected problem or unintended behavior and removed need more info labels Jun 22, 2020
@danielnelson danielnelson added this to the 1.14.5 milestone Jun 22, 2020
@danielnelson
Contributor

I don't remember why I disallowed it originally, but it doesn't seem to be a requirement that a summary have any quantile entries, so I'll remove the restriction (#7740).
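The effect of relaxing that restriction can be pictured with a hypothetical before/after validity check (illustrative only, not the actual Telegraf source or the code in #7740):

```python
# Hypothetical illustration of the relaxed check: a summary family used to
# be treated as valid only when it had quantile series; after the fix,
# _sum/_count alone suffice, matching the exposition format.

def is_valid_summary_old(quantiles, has_sum, has_count):
    # Old behavior: quantile-less summaries were dropped.
    return bool(quantiles) and has_sum and has_count

def is_valid_summary_new(quantiles, has_sum, has_count):
    # New behavior: quantiles are optional.
    return bool(quantiles) or has_sum or has_count

# Micrometer-style summary: _count and _sum only, no quantiles.
print(is_valid_summary_old([], True, True))  # False: dropped before the fix
print(is_valid_summary_new([], True, True))  # True: kept after the fix
```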
