Add support for metrics in groupbyattrsprocessor #6232
bertysentry added a commit to sentrysoftware/opentelemetry-collector-contrib that referenced this issue on Nov 30, 2021:

> …or *Metrics* signal
> * Added support for *Metrics* signal
> * Fixed bug when overlapping attributes in parsed record and original *Resource*
> * Added details and examples to **README.md**
> * Some refactoring for clarification
bertysentry added a commit to sentrysoftware/opentelemetry-collector-contrib that referenced this issue on Dec 2, 2021.
bertysentry added a commit to sentrysoftware/opentelemetry-collector-contrib that referenced this issue on Dec 6, 2021:

> …ssues after code review
jpkrohling pushed a commit that referenced this issue on Dec 7, 2021:

> …or (#6248)
> * [issue-#6232] **groupbyattrsprocessor** Added support for *Metrics* signal
> * Added support for *Metrics* signal
> * Fixed bug when overlapping attributes in parsed record and original *Resource*
> * Added details and examples to **README.md**
> * Some refactoring for clarification
> * [issue-#6232] Run `make goporto`
> * [issue-#6232] **groupbyattrs** Fixed a bunch of minor issues after code review
Resolved via #6248
pmm-sumo added the `data:metrics` (Metric related issues) and `enhancement` (New feature or request) labels on Feb 15, 2022.
Use Case
When ingesting Prometheus metrics with the prometheusreceiver, all metrics are associated with the same Resource, which is simply the scraped target host (and happens to be localhost if you're using prometheusexecreceiver).
But an ingested Prometheus metric may carry a label indicating that it actually relates to a different host (not the one where the Prometheus exporter is running), and this information currently cannot be leveraged and properly represented as a separate Resource.
Without the proper Resource attachment, exporters cannot properly expose these Prometheus metrics (except the Prometheus exporters, which have no such concept). Example: when exporting metrics to Datadog, only Resources (with a "host.name" attribute) are considered and displayed as Hosts.
Solution
This problem has already been solved by the groupbyattrs processor, which cleverly leverages labels to attach traces and logs to the corresponding Resources. Unfortunately, this processor currently supports only the Traces and Logs signals. Adding Metrics support to the groupbyattrs processor would solve the problem very elegantly and would make ingested Prometheus metrics translate much better into OTLP metrics.
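To make the idea concrete, here is a minimal sketch of what a collector configuration could look like once Metrics support exists. It assumes the processor's existing `keys` option would apply to metrics as well, and the `host.name` key and the receiver/exporter names are illustrative:

```yaml
# Hypothetical config sketch; assumes groupbyattrs gains Metrics support
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: example
          static_configs:
            - targets: ["localhost:9090"]

processors:
  groupbyattrs:
    # Promote this data-point attribute to a Resource attribute,
    # grouping metrics under one Resource per distinct value
    keys:
      - host.name

exporters:
  datadog:
    api:
      key: ${DD_API_KEY}

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [groupbyattrs]
      exporters: [datadog]
```

With such a pipeline, each distinct `host.name` value would yield its own Resource, which Datadog could then display as a separate Host.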
Alternatives
I've considered leveraging the metricstransform processor, which has a Group-By-Metric feature, but it doesn't allow grouping by values coming from labels.
I've also considered the resourcedetection processor, but it doesn't act on individual metrics and still associates all metrics with the same Resource.
More information
Example of Prometheus metrics being consumed:
These metrics do not appear as 2 separate hosts in Datadog, because they are all associated with the localhost Resource: Datadog simply creates Hosts from OpenTelemetry Resources (with a proper host.name attribute).
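To illustrate the scenario, a hypothetical Prometheus scrape might look like the following (metric and label names are invented for illustration, not taken from the original example):

```text
# Hypothetical scrape output from a single exporter running on localhost.
# Each sample carries a label naming the host it describes, but after
# ingestion both samples share the one "localhost" Resource.
hw_temperature_celsius{host="server-a", sensor="cpu0"} 41.0
hw_temperature_celsius{host="server-b", sensor="cpu0"} 38.5
```

Here the `host` label distinguishes server-a from server-b, yet without grouping-by-attributes both series end up on the same Resource, so a Resource-based backend sees only one host.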