
[exporter/prometheus] collector do not support metric histogram with temporality = DELTA #19153

Closed
caorong opened this issue Feb 28, 2023 · 10 comments
Labels: bug, closed as inactive, exporter/prometheus, question, Stale

Comments


caorong commented Feb 28, 2023

Describe the bug

I use the OpenTelemetry Java agent to instrument my application, export the metrics over OTLP to the otel-collector, and have Prometheus scrape and store the metrics from the collector's prometheus exporter.

When I run the Java agent with the config below:

props:
-Dotel.javaagent.debug=true
-Dotel.exporter.otlp.metrics.temporality.preference=CUMULATIVE
-Dotel.exporter.otlp.metrics.default.histogram.aggregation=EXPLICIT_BUCKET_HISTOGRAM

env:
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_METRICS_EXPORTER=otlp

with CUMULATIVE, curl http://localhost:8889/metrics works fine: the data is present.

But if I change -Dotel.exporter.otlp.metrics.temporality.preference to DELTA, the histogram no longer appears in the curl http://localhost:8889/metrics output.
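For context, the difference between the two temporality modes can be sketched in plain Java (a minimal illustration with invented names, not the SDK's actual aggregator):

```java
import java.util.ArrayList;
import java.util.List;

class TemporalityDemo {
    // CUMULATIVE: each export carries the running total since process start.
    static List<Long> cumulativeExports(int[] perInterval) {
        List<Long> out = new ArrayList<>();
        long total = 0;
        for (int n : perInterval) {
            total += n;
            out.add(total);
        }
        return out;
    }

    // DELTA: each export carries only what happened since the previous export,
    // so a backend that just caches the latest point for scraping (like a
    // Prometheus endpoint) cannot serve totals without accumulating first.
    static List<Long> deltaExports(int[] perInterval) {
        List<Long> out = new ArrayList<>();
        for (int n : perInterval) {
            out.add((long) n);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] recordings = {5, 18, 7};
        System.out.println("cumulative: " + cumulativeExports(recordings)); // [5, 23, 30]
        System.out.println("delta:      " + deltaExports(recordings));      // [5, 18, 7]
    }
}
```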

Steps to reproduce
Java client that produces the metric:

Meter meter = GlobalOpenTelemetry.get().getMeter(this.getClass().getName());
LongHistogram histogram = meter.histogramBuilder(metric).ofLongs().build();
Random random = new Random();

for (int i = 0; i < 1000000; i++) {
    histogram.record(1, attr("sample-metric"));
    Thread.sleep(random.nextInt(1000));
}

// Pairs the trailing varargs into key/value attributes.
private static Attributes attr(String metric, String... tags) {
    AttributesBuilder builder = Attributes.builder().put("service", SERVICE_NAME).put("metric", metric);
    for (int i = 0; i < tags.length - 1; i++) {
        builder.put(tags[i], tags[++i]);
    }
    return builder.build();
}
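The pairing loop in attr is easy to misread because it increments i twice per iteration; a standalone sketch of that same logic (plain Map instead of the OpenTelemetry AttributesBuilder, so it runs without the SDK):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class AttrPairing {
    // Same pairing loop as attr(): consume tags two at a time as key, value.
    // A trailing unpaired element is silently dropped by the `length - 1` bound.
    static Map<String, String> pair(String... tags) {
        Map<String, String> out = new LinkedHashMap<>();
        for (int i = 0; i < tags.length - 1; i++) {
            out.put(tags[i], tags[++i]);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(pair("env", "prod", "region", "sg")); // {env=prod, region=sg}
        System.out.println(pair("env", "prod", "dangling"));     // {env=prod}
    }
}
```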

The otel-collector's own log shows the data points, but they do not show up with curl http://localhost:8889/metrics:

InstrumentationScope com.woo.monitor.OpenTelemetryService
Metric #0
Descriptor:
     -> Name: handleExecutionReport
     -> Description:
     -> Unit:
     -> DataType: Histogram
     -> AggregationTemporality: Delta
HistogramDataPoints #0
Data point attributes:
     -> metric: Str(handleExecutionReport)
     -> service: Str(dubbo-test)
StartTimestamp: 2023-02-28 06:50:04.63905 +0000 UTC
Timestamp: 2023-02-28 06:50:14.637153 +0000 UTC
Count: 5
Sum: 100.000000
Min: 0.000000
Max: 40.000000
ExplicitBounds #0: 0.000000
ExplicitBounds #1: 5.000000
ExplicitBounds #2: 10.000000
ExplicitBounds #3: 25.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 75.000000
ExplicitBounds #6: 100.000000
ExplicitBounds #7: 250.000000
ExplicitBounds #8: 500.000000
ExplicitBounds #9: 750.000000
ExplicitBounds #10: 1000.000000
ExplicitBounds #11: 2500.000000
ExplicitBounds #12: 5000.000000
ExplicitBounds #13: 7500.000000
ExplicitBounds #14: 10000.000000
Buckets #0, Count: 1
Buckets #1, Count: 0
Buckets #2, Count: 1
Buckets #3, Count: 1
Buckets #4, Count: 2
Buckets #5, Count: 0
Buckets #6, Count: 0
Buckets #7, Count: 0
Buckets #8, Count: 0
Buckets #9, Count: 0
Buckets #10, Count: 0
Buckets #11, Count: 0
Buckets #12, Count: 0
Buckets #13, Count: 0
Buckets #14, Count: 0
Buckets #15, Count: 0

What did you expect to see?

The histogram should appear in the curl http://localhost:8889/metrics output when the temporality preference is DELTA.

What did you see instead?

The histogram did not appear in the curl http://localhost:8889/metrics output when the temporality preference was DELTA.
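For context, Prometheus's exposition format is inherently cumulative, so a scrape endpoint cannot serve delta points as-is; it would first have to fold each delta into a running total per series. A minimal sketch of that accumulation in plain Java (invented names, not the collector's actual Go implementation):

```java
class DeltaAccumulator {
    // Running cumulative state for one (metric, attribute-set) series.
    long count;
    double sum;
    long[] bucketCounts;

    DeltaAccumulator(int numBuckets) {
        this.bucketCounts = new long[numBuckets];
    }

    // Merge one DELTA histogram data point into the cumulative state.
    // A Prometheus scrape must always report totals since start, never
    // per-interval deltas, so every delta point has to be added in.
    void mergeDelta(long deltaCount, double deltaSum, long[] deltaBuckets) {
        count += deltaCount;
        sum += deltaSum;
        for (int i = 0; i < bucketCounts.length; i++) {
            bucketCounts[i] += deltaBuckets[i];
        }
    }
}
```

With state like this, the two delta points in the logs above (Count 5, then Count 18) would be served to a scrape as a cumulative count of 23.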

What version did you use?

v0.72.0

What config did you use?

extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:

  opencensus:

  # Collect own metrics
  prometheus:
    config:
      scrape_configs:
      - job_name: 'otel-collector'
        scrape_interval: 10s
        static_configs:
        - targets: ['0.0.0.0:8888']

  jaeger:
    protocols:
      grpc:
      thrift_binary:
      thrift_compact:
      thrift_http:

  zipkin:

processors:
  batch:

exporters:
  logging:
    verbosity: detailed

  prometheus:
    endpoint: 0.0.0.0:8889
    namespace: default

service:

  pipelines:

    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]

    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [logging, prometheus]

  extensions: [health_check, pprof, zpages]

Environment

OS: macOS
Used the prebuilt release binary (not compiled manually).

Additional context

otel-collector log:

ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope com.woo.monitor.OpenTelemetryService
Metric #0
Descriptor:
     -> Name: handleExecutionReport
     -> Description:
     -> Unit:
     -> DataType: Histogram
     -> AggregationTemporality: Delta
HistogramDataPoints #0
Data point attributes:
     -> metric: Str(handleExecutionReport)
     -> service: Str(dubbo-test)
StartTimestamp: 2023-02-28 07:35:30.929069 +0000 UTC
Timestamp: 2023-02-28 07:35:40.932712 +0000 UTC
Count: 18
Sum: 72450.000000
Min: 3940.000000
Max: 4110.000000
ExplicitBounds #0: 0.000000
ExplicitBounds #1: 5.000000
ExplicitBounds #2: 10.000000
ExplicitBounds #3: 25.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 75.000000
ExplicitBounds #6: 100.000000
ExplicitBounds #7: 250.000000
ExplicitBounds #8: 500.000000
ExplicitBounds #9: 750.000000
ExplicitBounds #10: 1000.000000
ExplicitBounds #11: 2500.000000
ExplicitBounds #12: 5000.000000
ExplicitBounds #13: 7500.000000
ExplicitBounds #14: 10000.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 0
Buckets #3, Count: 0
Buckets #4, Count: 0
Buckets #5, Count: 0
Buckets #6, Count: 0
Buckets #7, Count: 0
Buckets #8, Count: 0
Buckets #9, Count: 0
Buckets #10, Count: 0
Buckets #11, Count: 0
Buckets #12, Count: 18
Buckets #13, Count: 0
Buckets #14, Count: 0
Buckets #15, Count: 0

curl output (the handleExecutionReport histogram is absent):

# HELP default_otelcol_exporter_enqueue_failed_log_records Number of log records failed to be added to the sending queue.
# TYPE default_otelcol_exporter_enqueue_failed_log_records counter
default_otelcol_exporter_enqueue_failed_log_records{exporter="logging",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
default_otelcol_exporter_enqueue_failed_log_records{exporter="prometheus",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
# HELP default_otelcol_exporter_enqueue_failed_metric_points Number of metric points failed to be added to the sending queue.
# TYPE default_otelcol_exporter_enqueue_failed_metric_points counter
default_otelcol_exporter_enqueue_failed_metric_points{exporter="logging",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
default_otelcol_exporter_enqueue_failed_metric_points{exporter="prometheus",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
# HELP default_otelcol_exporter_enqueue_failed_spans Number of spans failed to be added to the sending queue.
# TYPE default_otelcol_exporter_enqueue_failed_spans counter
default_otelcol_exporter_enqueue_failed_spans{exporter="logging",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
default_otelcol_exporter_enqueue_failed_spans{exporter="prometheus",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
# HELP default_otelcol_exporter_send_failed_metric_points Number of metric points in failed attempts to send to destination.
# TYPE default_otelcol_exporter_send_failed_metric_points counter
default_otelcol_exporter_send_failed_metric_points{exporter="logging",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
default_otelcol_exporter_send_failed_metric_points{exporter="prometheus",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0
# HELP default_otelcol_exporter_sent_metric_points Number of metric points successfully sent to destination.
# TYPE default_otelcol_exporter_sent_metric_points counter
default_otelcol_exporter_sent_metric_points{exporter="logging",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 937
default_otelcol_exporter_sent_metric_points{exporter="prometheus",instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 937
# HELP default_otelcol_process_cpu_seconds Total CPU user and system time in seconds
# TYPE default_otelcol_process_cpu_seconds counter
default_otelcol_process_cpu_seconds{instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 0.47000000000000003
# HELP default_otelcol_process_memory_rss Total physical memory (resident set size)
# TYPE default_otelcol_process_memory_rss gauge
default_otelcol_process_memory_rss{instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 4.8562176e+07
# HELP default_otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc')
# TYPE default_otelcol_process_runtime_heap_alloc_bytes gauge
default_otelcol_process_runtime_heap_alloc_bytes{instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 1.6218976e+07
# HELP default_otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc')
# TYPE default_otelcol_process_runtime_total_alloc_bytes counter
default_otelcol_process_runtime_total_alloc_bytes{instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 3.1089888e+07
# HELP default_otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys')
# TYPE default_otelcol_process_runtime_total_sys_memory_bytes gauge
default_otelcol_process_runtime_total_sys_memory_bytes{instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 4.3627784e+07
# HELP default_otelcol_process_uptime Uptime of the process
# TYPE default_otelcol_process_uptime counter
default_otelcol_process_uptime{instance="0.0.0.0:8888",job="otel-collector",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 109.267695
# HELP default_otelcol_processor_batch_batch_send_size Number of units in the batch
# TYPE default_otelcol_processor_batch_batch_send_size histogram
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="10"} 0
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="25"} 0
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="50"} 10
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="75"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="100"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="250"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="500"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="750"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="1000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="2000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="3000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="4000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="5000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="6000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="7000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="8000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="9000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="10000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="20000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="30000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="50000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="100000"} 23
default_otelcol_processor_batch_batch_send_size_bucket{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",le="+Inf"} 23
default_otelcol_processor_batch_batch_send_size_sum{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 937
default_otelcol_processor_batch_batch_send_size_count{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 23
# HELP default_otelcol_processor_batch_timeout_trigger_send Number of times the batch was sent due to a timeout trigger
# TYPE default_otelcol_processor_batch_timeout_trigger_send counter
default_otelcol_processor_batch_timeout_trigger_send{instance="0.0.0.0:8888",job="otel-collector",processor="batch",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0"} 23
# HELP default_otelcol_receiver_accepted_metric_points Number of metric points successfully pushed into the pipeline.
# TYPE default_otelcol_receiver_accepted_metric_points counter
default_otelcol_receiver_accepted_metric_points{instance="0.0.0.0:8888",job="otel-collector",receiver="otlp",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",transport="grpc"} 669
default_otelcol_receiver_accepted_metric_points{instance="0.0.0.0:8888",job="otel-collector",receiver="prometheus",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",transport="http"} 268
# HELP default_otelcol_receiver_refused_metric_points Number of metric points that could not be pushed into the pipeline.
# TYPE default_otelcol_receiver_refused_metric_points counter
default_otelcol_receiver_refused_metric_points{instance="0.0.0.0:8888",job="otel-collector",receiver="otlp",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",transport="grpc"} 0
default_otelcol_receiver_refused_metric_points{instance="0.0.0.0:8888",job="otel-collector",receiver="prometheus",service_instance_id="5c2292b4-36f8-4d9d-af15-3ae9505c78dc",service_name="otelcol",service_version="0.66.0",transport="http"} 0
# HELP default_otlp_exporter_exported 
# TYPE default_otlp_exporter_exported counter
default_otlp_exporter_exported{job="woo-optl-server",success="true",type="span"} 17
# HELP default_otlp_exporter_seen 
# TYPE default_otlp_exporter_seen counter
default_otlp_exporter_seen{job="woo-optl-server",type="span"} 17
# HELP default_process_runtime_jvm_buffer_count The number of buffers in the pool
# TYPE default_process_runtime_jvm_buffer_count gauge
default_process_runtime_jvm_buffer_count{job="woo-optl-client",pool="direct"} 10
default_process_runtime_jvm_buffer_count{job="woo-optl-client",pool="mapped"} 0
default_process_runtime_jvm_buffer_count{job="woo-optl-client",pool="mapped - 'non-volatile memory'"} 0
default_process_runtime_jvm_buffer_count{job="woo-optl-server",pool="direct"} 22
default_process_runtime_jvm_buffer_count{job="woo-optl-server",pool="mapped"} 0
default_process_runtime_jvm_buffer_count{job="woo-optl-server",pool="mapped - 'non-volatile memory'"} 0
# HELP default_process_runtime_jvm_buffer_limit Total capacity of the buffers in this pool
# TYPE default_process_runtime_jvm_buffer_limit gauge
default_process_runtime_jvm_buffer_limit{job="woo-optl-client",pool="direct"} 1.6817809e+07
default_process_runtime_jvm_buffer_limit{job="woo-optl-client",pool="mapped"} 0
default_process_runtime_jvm_buffer_limit{job="woo-optl-client",pool="mapped - 'non-volatile memory'"} 0
default_process_runtime_jvm_buffer_limit{job="woo-optl-server",pool="direct"} 2.18152972e+08
default_process_runtime_jvm_buffer_limit{job="woo-optl-server",pool="mapped"} 0
default_process_runtime_jvm_buffer_limit{job="woo-optl-server",pool="mapped - 'non-volatile memory'"} 0
# HELP default_process_runtime_jvm_buffer_usage Memory that the Java virtual machine is using for this buffer pool
# TYPE default_process_runtime_jvm_buffer_usage gauge
default_process_runtime_jvm_buffer_usage{job="woo-optl-client",pool="direct"} 1.681781e+07
default_process_runtime_jvm_buffer_usage{job="woo-optl-client",pool="mapped"} 0
default_process_runtime_jvm_buffer_usage{job="woo-optl-client",pool="mapped - 'non-volatile memory'"} 0
default_process_runtime_jvm_buffer_usage{job="woo-optl-server",pool="direct"} 2.18152973e+08
default_process_runtime_jvm_buffer_usage{job="woo-optl-server",pool="mapped"} 0
default_process_runtime_jvm_buffer_usage{job="woo-optl-server",pool="mapped - 'non-volatile memory'"} 0
# HELP default_process_runtime_jvm_classes_current_loaded Number of classes currently loaded
# TYPE default_process_runtime_jvm_classes_current_loaded gauge
default_process_runtime_jvm_classes_current_loaded{job="woo-optl-client"} 9557
default_process_runtime_jvm_classes_current_loaded{job="woo-optl-server"} 9250
# HELP default_process_runtime_jvm_classes_loaded Number of classes loaded since JVM start
# TYPE default_process_runtime_jvm_classes_loaded counter
default_process_runtime_jvm_classes_loaded{job="woo-optl-client"} 1
default_process_runtime_jvm_classes_loaded{job="woo-optl-server"} 9251
# HELP default_process_runtime_jvm_classes_unloaded Number of classes unloaded since JVM start
# TYPE default_process_runtime_jvm_classes_unloaded counter
default_process_runtime_jvm_classes_unloaded{job="woo-optl-client"} 0
default_process_runtime_jvm_classes_unloaded{job="woo-optl-server"} 1
# HELP default_process_runtime_jvm_cpu_utilization Recent cpu utilization for the process
# TYPE default_process_runtime_jvm_cpu_utilization gauge
default_process_runtime_jvm_cpu_utilization{job="woo-optl-client"} 0.0003069271916685456
default_process_runtime_jvm_cpu_utilization{job="woo-optl-server"} 8.46374404946861e-05
# HELP default_process_runtime_jvm_gc_duration Duration of JVM garbage collection actions
# TYPE default_process_runtime_jvm_gc_duration histogram
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="0"} 0
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="5"} 0
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="10"} 0
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="25"} 7
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="50"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="75"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="100"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="250"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="500"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="750"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="1000"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="2500"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="5000"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="7500"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="10000"} 8
default_process_runtime_jvm_gc_duration_bucket{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server",le="+Inf"} 8
default_process_runtime_jvm_gc_duration_sum{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server"} 141
default_process_runtime_jvm_gc_duration_count{action="end of minor GC",gc="G1 Young Generation",job="woo-optl-server"} 8
# HELP default_process_runtime_jvm_memory_committed Measure of memory committed
# TYPE default_process_runtime_jvm_memory_committed gauge
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="CodeHeap 'non-nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 4.325376e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 1.1665408e+07
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="Compressed Class Space",type="non_heap"} 6.291456e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="G1 Eden Space",type="heap"} 2.01326592e+08
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="G1 Old Gen",type="heap"} 1.21634816e+08
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="G1 Survivor Space",type="heap"} 4.194304e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-client",pool="Metaspace",type="non_heap"} 5.0003968e+07
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="CodeHeap 'non-nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 4.25984e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 1.2255232e+07
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="Compressed Class Space",type="non_heap"} 6.029312e+06
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="G1 Eden Space",type="heap"} 1.92937984e+08
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="G1 Old Gen",type="heap"} 1.21634816e+08
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="G1 Survivor Space",type="heap"} 1.2582912e+07
default_process_runtime_jvm_memory_committed{job="woo-optl-server",pool="Metaspace",type="non_heap"} 4.8365568e+07
# HELP default_process_runtime_jvm_memory_init Measure of initial memory requested
# TYPE default_process_runtime_jvm_memory_init gauge
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="CodeHeap 'non-nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="Compressed Class Space",type="non_heap"} 0
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="G1 Eden Space",type="heap"} 2.9360128e+07
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="G1 Old Gen",type="heap"} 5.07510784e+08
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="G1 Survivor Space",type="heap"} 0
default_process_runtime_jvm_memory_init{job="woo-optl-client",pool="Metaspace",type="non_heap"} 0
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="CodeHeap 'non-nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 2.555904e+06
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="Compressed Class Space",type="non_heap"} 0
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="G1 Eden Space",type="heap"} 2.9360128e+07
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="G1 Old Gen",type="heap"} 5.07510784e+08
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="G1 Survivor Space",type="heap"} 0
default_process_runtime_jvm_memory_init{job="woo-optl-server",pool="Metaspace",type="non_heap"} 0
# HELP default_process_runtime_jvm_memory_limit Measure of max obtainable memory
# TYPE default_process_runtime_jvm_memory_limit gauge
default_process_runtime_jvm_memory_limit{job="woo-optl-client",pool="CodeHeap 'non-nmethods'",type="non_heap"} 5.840896e+06
default_process_runtime_jvm_memory_limit{job="woo-optl-client",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 1.22908672e+08
default_process_runtime_jvm_memory_limit{job="woo-optl-client",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 1.22908672e+08
default_process_runtime_jvm_memory_limit{job="woo-optl-client",pool="Compressed Class Space",type="non_heap"} 1.073741824e+09
default_process_runtime_jvm_memory_limit{job="woo-optl-client",pool="G1 Old Gen",type="heap"} 8.589934592e+09
default_process_runtime_jvm_memory_limit{job="woo-optl-server",pool="CodeHeap 'non-nmethods'",type="non_heap"} 5.840896e+06
default_process_runtime_jvm_memory_limit{job="woo-optl-server",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 1.22908672e+08
default_process_runtime_jvm_memory_limit{job="woo-optl-server",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 1.22908672e+08
default_process_runtime_jvm_memory_limit{job="woo-optl-server",pool="Compressed Class Space",type="non_heap"} 1.073741824e+09
default_process_runtime_jvm_memory_limit{job="woo-optl-server",pool="G1 Old Gen",type="heap"} 8.589934592e+09
# HELP default_process_runtime_jvm_memory_usage Measure of memory used
# TYPE default_process_runtime_jvm_memory_usage gauge
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="CodeHeap 'non-nmethods'",type="non_heap"} 1.441024e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 4.264576e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 1.164864e+07
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="Compressed Class Space",type="non_heap"} 6.075184e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="G1 Eden Space",type="heap"} 1.09051904e+08
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="G1 Old Gen",type="heap"} 2.4949248e+07
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="G1 Survivor Space",type="heap"} 3.8352e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-client",pool="Metaspace",type="non_heap"} 4.9571272e+07
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="CodeHeap 'non-nmethods'",type="non_heap"} 1.379584e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="CodeHeap 'non-profiled nmethods'",type="non_heap"} 4.222464e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="CodeHeap 'profiled nmethods'",type="non_heap"} 1.2209152e+07
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="Compressed Class Space",type="non_heap"} 5.871968e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="G1 Eden Space",type="heap"} 1.2582912e+07
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="G1 Old Gen",type="heap"} 2.297344e+07
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="G1 Survivor Space",type="heap"} 9.550288e+06
default_process_runtime_jvm_memory_usage{job="woo-optl-server",pool="Metaspace",type="non_heap"} 4.8008368e+07
# HELP default_process_runtime_jvm_memory_usage_after_last_gc Measure of memory used after the most recent garbage collection event on this pool
# TYPE default_process_runtime_jvm_memory_usage_after_last_gc gauge
default_process_runtime_jvm_memory_usage_after_last_gc{job="woo-optl-client",pool="G1 Eden Space",type="heap"} 0
default_process_runtime_jvm_memory_usage_after_last_gc{job="woo-optl-client",pool="G1 Old Gen",type="heap"} 0
default_process_runtime_jvm_memory_usage_after_last_gc{job="woo-optl-client",pool="G1 Survivor Space",type="heap"} 3.8352e+06
default_process_runtime_jvm_memory_usage_after_last_gc{job="woo-optl-server",pool="G1 Eden Space",type="heap"} 0
default_process_runtime_jvm_memory_usage_after_last_gc{job="woo-optl-server",pool="G1 Old Gen",type="heap"} 0
default_process_runtime_jvm_memory_usage_after_last_gc{job="woo-optl-server",pool="G1 Survivor Space",type="heap"} 9.550288e+06
# HELP default_process_runtime_jvm_system_cpu_load_1m Average CPU load of the whole system for the last minute
# TYPE default_process_runtime_jvm_system_cpu_load_1m gauge
default_process_runtime_jvm_system_cpu_load_1m{job="woo-optl-client"} 4.08837890625
default_process_runtime_jvm_system_cpu_load_1m{job="woo-optl-server"} 5.126953125
# HELP default_process_runtime_jvm_system_cpu_utilization Recent cpu utilization for the whole system
# TYPE default_process_runtime_jvm_system_cpu_utilization gauge
default_process_runtime_jvm_system_cpu_utilization{job="woo-optl-client"} 0.12908605737158105
default_process_runtime_jvm_system_cpu_utilization{job="woo-optl-server"} 0.19529133967636642
# HELP default_process_runtime_jvm_threads_count Number of executing threads
# TYPE default_process_runtime_jvm_threads_count gauge
default_process_runtime_jvm_threads_count{daemon="false",job="woo-optl-client"} 1
default_process_runtime_jvm_threads_count{daemon="false",job="woo-optl-server"} 1
default_process_runtime_jvm_threads_count{daemon="true",job="woo-optl-client"} 28
default_process_runtime_jvm_threads_count{daemon="true",job="woo-optl-server"} 92
# HELP default_processedSpans The number of spans processed by the BatchSpanProcessor. [dropped=true if they were dropped due to high throughput]
# TYPE default_processedSpans counter
default_processedSpans{dropped="false",job="woo-optl-server",spanProcessorType="BatchSpanProcessor"} 17
# HELP default_queueSize The number of spans queued
# TYPE default_queueSize gauge
default_queueSize{job="woo-optl-client",spanProcessorType="BatchSpanProcessor"} 0
default_queueSize{job="woo-optl-server",spanProcessorType="BatchSpanProcessor"} 0
# HELP default_scrape_duration_seconds Duration of the scrape
# TYPE default_scrape_duration_seconds gauge
default_scrape_duration_seconds{instance="0.0.0.0:8888",job="otel-collector"} 0.010252193
# HELP default_scrape_samples_post_metric_relabeling The number of samples remaining after metric relabeling was applied
# TYPE default_scrape_samples_post_metric_relabeling gauge
default_scrape_samples_post_metric_relabeling{instance="0.0.0.0:8888",job="otel-collector"} 46
# HELP default_scrape_samples_scraped The number of samples the target exposed
# TYPE default_scrape_samples_scraped gauge
default_scrape_samples_scraped{instance="0.0.0.0:8888",job="otel-collector"} 46
# HELP default_scrape_series_added The approximate number of new series in this scrape
# TYPE default_scrape_series_added gauge
default_scrape_series_added{instance="0.0.0.0:8888",job="otel-collector"} 46
# HELP default_target_info Target metadata
# TYPE default_target_info gauge
default_target_info{http_scheme="http",instance="0.0.0.0:8888",job="otel-collector",net_host_port="8888"} 1
# HELP default_up The scraping was successful
# TYPE default_up gauge
default_up{instance="0.0.0.0:8888",job="otel-collector"} 1
@caorong caorong added the bug Something isn't working label Feb 28, 2023
@mx-psi mx-psi transferred this issue from open-telemetry/opentelemetry-collector Mar 1, 2023
@mx-psi mx-psi changed the title collector do not support metric histogram with temporality = DELTA [exporter/prometheus] collector do not support metric histogram with temporality = DELTA Mar 1, 2023
@mx-psi mx-psi added question Further information is requested exporter/prometheus labels Mar 1, 2023
github-actions bot commented Mar 1, 2023

Pinging code owners for exporter/prometheus: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself.

mx-psi commented Mar 1, 2023

AIUI this is by design; you need to convert them to cumulative temporality, otherwise they will be dropped:

OpenTelemetry Histograms with Delta aggregation temporality SHOULD be aggregated into a Cumulative aggregation temporality and follow the logic above, or MUST be dropped.

I will let @Aneurysm9 describe the best way to handle this
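For illustration, such a conversion can be expressed as a collector pipeline. This is a hypothetical sketch, not the reporter's setup: it assumes a collector-contrib build that ships a `deltatocumulative` processor (which landed well after this issue was filed), and all endpoints and ports are examples only:

```yaml
# Hypothetical pipeline: fold delta metrics into cumulative ones before
# they reach the Prometheus exporter. Assumes a collector-contrib build
# that includes the deltatocumulative processor; endpoints are examples.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  deltatocumulative:

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [deltatocumulative]
      exporters: [prometheus]
```

The workaround available at the time was on the client side instead: keep `-Dotel.exporter.otlp.metrics.temporality.preference=CUMULATIVE`, as in the reporter's working configuration.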

@Aneurysm9
#9006 looks to implement that conversion logic for delta histograms.
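As a rough sketch of what that conversion entails (written in Python rather than the collector's actual Go, with illustrative type and field names), an exporter folds each delta histogram point into per-series cumulative state and drops out-of-order points rather than corrupting the series:

```python
# Illustrative delta -> cumulative histogram accumulation. Field names
# (HistogramPoint, CumulativeState) are made up for this sketch; they do
# not match the collector's internal data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HistogramPoint:
    start_time: int           # start of the delta interval
    time: int                 # end of the delta interval
    count: int
    total: float              # "sum" in OTLP; renamed to avoid the builtin
    bucket_counts: List[int]  # one count per bucket, +Inf bucket last

@dataclass
class CumulativeState:
    count: int = 0
    total: float = 0.0
    bucket_counts: List[int] = field(default_factory=list)
    last_time: int = 0

    def accumulate(self, p: HistogramPoint) -> None:
        # Reject out-of-order or stale points instead of corrupting
        # the monotonically increasing cumulative series.
        if p.time <= self.last_time:
            return
        if not self.bucket_counts:
            self.bucket_counts = [0] * len(p.bucket_counts)
        self.count += p.count
        self.total += p.total
        for i, c in enumerate(p.bucket_counts):
            self.bucket_counts[i] += c
        self.last_time = p.time

state = CumulativeState()
state.accumulate(HistogramPoint(0, 10, count=3, total=3.0, bucket_counts=[3, 0]))
state.accumulate(HistogramPoint(10, 20, count=2, total=2.0, bucket_counts=[1, 1]))
print(state.count, state.bucket_counts)  # 5 [4, 1]
```

The real implementation also has to handle timestamp misalignment and series expiry, which is what the follow-up PR discussed below adds tests for.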

github-actions bot commented May 1, 2023

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot commented Jul 3, 2023

(Stale-bot inactivity warning, identical to the message above.)

@github-actions github-actions bot added the Stale label Jul 3, 2023
github-actions bot commented Sep 1, 2023

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 1, 2023
@mx-psi mx-psi reopened this Sep 1, 2023
@mx-psi mx-psi removed the Stale label Sep 1, 2023
@hampusrosvall
@mx-psi what is the latest status on this issue?

mx-psi commented Oct 24, 2023

@hampusrosvall I am not actively keeping track of this issue; it is not fixed, and there do not seem to be any open PRs trying to fix it, but I don't have more information.

(Stale-bot inactivity warning, identical to the message above.)

@github-actions github-actions bot added the Stale label Dec 25, 2023
djaglowski pushed a commit that referenced this issue Jan 8, 2024
…#23790)

**Description:**
This continues the work done in the now closed
[PR](#20530).
I have addressed issues raised in the original PR by
- Adding logic to handle timestamp misalignments
- Adding a fix for an out-of-bounds bug

In addition, I have performed end-to-end testing in a local setup, and
confirmed that accumulated histogram time series are correct.

**Link to tracking Issue:**

#4968

#9006

#19153
**Testing:**
Added tests for timestamp misalignment and an out-of-bounds bug
discovered in the previous PR.
End-to-end testing to ensure histogram bucket counts exported to
Prometheus are correct

---------

Signed-off-by: Loc Mai <locmai0201@gmail.com>
Signed-off-by: xchen <xchen@axon.com>
Signed-off-by: stephenchen <x.chen1016@gmail.com>
Co-authored-by: Lev Popov <nabam@nabam.net>
Co-authored-by: Lev Popov <leo@nabam.net>
Co-authored-by: Anthony Mirabella <a9@aneurysm9.com>
Co-authored-by: Loc Mai <locmai0201@gmail.com>
Co-authored-by: Alex Boten <aboten@lightstep.com>
cparkins pushed a commit to AmadeusITGroup/opentelemetry-collector-contrib that referenced this issue Jan 10, 2024
(Commit message identical to the one above.)
This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Feb 23, 2024