Failed to discard traces and metrics using the filterprocessor #36780

Closed
ixiaoyi93 opened this issue Dec 11, 2024 · 5 comments
Labels
processor/filter (Filter processor), question (Further information is requested)

Comments

@ixiaoyi93

Component(s)

processor/filter

What happened?

Description

Spans with http.route=/xxx/health are dropped by the filter processor, and the traces pipeline feeds the spanmetrics connector to convert them into Prometheus-compatible metrics, but the http_server_request_duration_seconds_count metric with the http.route=/xxx/health label can still be seen in Prometheus.

Steps to Reproduce

Expected Result

In Prometheus, http_server_request_duration_seconds_count{http_route=~"/xxx/health"} returns 0.

Actual Result

http_server_request_duration_seconds_count{http_route=~"/xxx/health"} continues to grow.

Collector version

v0.114.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
      - job_name: 'otelcol'
        scrape_interval: 10s
        static_configs:
        - targets: ['0.0.0.0:8888']

processors:
  filter:
    error_mode: ignore
    traces:
      span:
        - IsMatch(attributes["http.route"], ".*health.*")
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  batch:
    send_batch_size: 10000
    timeout: 10s

connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
    dimensions:
      - name: http.request.method
      - name: http.response.status_code
      - name: http.route
    exemplars:
      enabled: true
    dimensions_cache_size: 1000
    aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
    metrics_flush_interval: 15s
    metrics_expiration: 5m
    events:
      enabled: true
      dimensions:
        - name: exception.type
        - name: exception.message
    resource_metrics_key_attributes:
      - service.name
      - telemetry.sdk.language
      - telemetry.sdk.name

exporters:
  debug:
    verbosity: detailed
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"

extensions:
  health_check:

service:
  extensions: [health_check]
  telemetry:
    logs:
      level: "debug"
    metrics:
      address: "0.0.0.0:8888"
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter, memory_limiter, batch]
      exporters: [otlp, spanmetrics]
    metrics:
      receivers: [otlp, prometheus]
      processors: [memory_limiter, batch]
      exporters: [prometheus]
    metrics/spanmetrics:
      receivers: [spanmetrics]
      processors: [memory_limiter, batch]
      exporters: [debug, prometheus]

Log output

No response

Additional context

No response

@ixiaoyi93 added the "bug" and "needs triage" labels on Dec 11, 2024
@github-actions bot added the "processor/filter" label on Dec 11, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@bacherfl (Contributor)

Hi @ixiaoyi93! I tried to reproduce the issue, but so far it seems like the filtering is working as expected. In your config I see that you have two metrics pipelines defined, where one receives metrics via the otlp receiver. Could it be that the unwanted metric comes in through that pipeline and is then forwarded to the prometheus exporter?
To verify this, could you please start the collector with only the metrics/spanmetrics pipeline enabled and check whether the issue persists with that configuration as well?
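A minimal sketch of that reduced configuration, assuming the receivers, processors, connectors, and exporters stay exactly as defined above:

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter, memory_limiter, batch]
      exporters: [otlp, spanmetrics]
    # The plain `metrics` pipeline is removed, so every metric reaching the
    # prometheus exporter must have come through the spanmetrics connector.
    metrics/spanmetrics:
      receivers: [spanmetrics]
      processors: [memory_limiter, batch]
      exporters: [debug, prometheus]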

@ixiaoyi93 (Author) commented Dec 17, 2024


Thank you very much for your reply! I tried your suggestion, and if I change any part of the following configuration, I run into problems.

service:
  extensions: [health_check]
  telemetry:
    logs:
      level: "debug"
    metrics:
      address: "0.0.0.0:8888"
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter, memory_limiter, batch]
      exporters: [otlp, spanmetrics]
    metrics:
      receivers: [otlp, prometheus]
      processors: [memory_limiter, batch]
      exporters: [prometheus]
    metrics/spanmetrics:
      receivers: [spanmetrics]
      processors: [memory_limiter, batch]
      exporters: [prometheus]

For example, if I remove otlp from pipelines.metrics.receivers, then I no longer get the http_server_request_duration_seconds_bucket metric.
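For illustration, that change looks roughly like this (sketch only; everything else unchanged):

service:
  pipelines:
    metrics:
      # otlp removed from receivers; the application's own
      # http_server_request_duration_seconds histogram, sent via OTLP,
      # then no longer reaches the prometheus exporter.
      receivers: [prometheus]
      processors: [memory_limiter, batch]
      exporters: [prometheus]

This suggests the histogram is produced by the application itself and shipped over OTLP, rather than generated by the spanmetrics connector.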

My scenario is to drop the telemetry generated by monitoring checks, such as requests to URLs like /matterserver/health or /matterserver/index.html. Because these checks are very common in Kubernetes, they generate a lot of traces and metrics.
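For that use case, a filter processor along these lines could drop spans for both routes (a sketch only; the listed conditions are ORed, and the unescaped dot in index.html simply matches any character):

processors:
  filter:
    error_mode: ignore
    traces:
      span:
        # A span is dropped if either condition matches its http.route.
        - IsMatch(attributes["http.route"], ".*/health")
        - IsMatch(attributes["http.route"], ".*/index.html")

Note that this only removes spans flowing through the traces pipeline; metrics that the application exports directly over OTLP are unaffected, which matches the behaviour discussed above.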

@ixiaoyi93 (Author)

Thanks, I've solved it.

@bacherfl added the "question" label and removed the "bug" and "needs triage" labels on Dec 17, 2024
@bacherfl (Contributor) commented Dec 17, 2024

Thank you for the update @ixiaoyi93! Glad to hear it's working. In that case I will close this issue now, but if there are any further questions, feel free to reopen it or create a new issue.

Just out of curiosity, what was the solution in this case? It may be of help for others who run into the same problem.
