Prometheus Job and Instance labels #575
Comments
This is a crucial bug and it has been open for more than 6 months.
I can confirm this is still the current behaviour. I tried setting the `job` label with a relabel rule. The relabel_config I'm using is `- source_labels: [__meta_kubernetes_pod_label_k8s_app]` with `target_label: job`, and I'm seeing an error.
When I set the `job` label this way, it fails. In my case I'll be using Prometheus to scrape the otelcol instance anyway, so I'll just use labels other than `job` and `instance`. I'd like to help fix this, though I'm not sure where to start.
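For context, a minimal sketch of the kind of receiver configuration described in the comment above; only the relabel rule is taken from the comment, while the job name and service discovery settings are assumptions:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-k8s-pods           # assumed job name
          kubernetes_sd_configs:
            - role: pod                     # assumed discovery mechanism
          relabel_configs:
            # Overwriting the auto-generated `job` label is what triggers the error.
            - source_labels: [__meta_kubernetes_pod_label_k8s_app]
              target_label: job
```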
The error comes from the Prometheus receiver itself.
The problem is that the receiver relies on the `job` label to look up the scrape target for each batch of samples.
However, otel's Prometheus receiver can't distinguish whether a `job` label came from the scrape config or from a relabel rule, so overwriting it breaks the lookup.
One way to work around this as a user is, instead of relabeling `job`, to write the value to a differently named label. Another way to work around this is having the receiver figure out a way to resolve the target without depending on these labels.
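As an illustration of the first workaround, a minimal sketch that keeps the auto-generated `job`/`instance` labels intact and writes the pod label to a custom target label instead (the job name, service discovery settings, and the `k8s_app` label name are assumptions):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-k8s-pods           # assumed job name
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            # Leave `job` and `instance` untouched and expose the pod label
            # under a different name instead.
            - source_labels: [__meta_kubernetes_pod_label_k8s_app]
              target_label: k8s_app
```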
Prometheus receiver needs `job` and `instance` to find the scrape target.

**How to get the target?** Currently, there's no good way to get the correct target when receiving metrics from Prometheus. The Appender interface doesn't support target information propagation, yet the target is known to the appender when it is first initiated.
It's possible to add target metadata to the Appender creation process. There's also another open PR (prometheus/prometheus#7771) from the Prometheus repo that suggests passing target metadata into the Appender.
**Re/label restriction in Prometheus receiver**
Today, the Prometheus receiver can only use the `job` and `instance` labels to identify the scrape target, which effectively reserves these two labels.
Although the Prometheus receiver should ideally be able to identify the target without relying on them, before the issue is fixed I would suggest people avoid changing these two labels.
cc @odeke-em
We will automatically add the `job` and `instance` labels; this is a known bug. Related to open-telemetry/prometheus-interoperability-spec#7.
I've submitted a fix for auto labeling in #2897. Please let me know if it helps.
…2897) In Prometheus, `job` and `instance` are the two auto-generated labels; however, they are both dropped by the prometheus receiver. Although this information is still available in `service.name` and `host`:`port`, dropping the labels breaks the data contract for most Prometheus users (who use `job` and `instance` to consume metrics in their own systems). This PR adds `job` and `instance` as well-known labels in the prometheus receiver to fix the issue. **Link to tracking Issue:** #575 #2499 #2363 open-telemetry/prometheus-interoperability-spec#7
Duplicate of open-telemetry/prometheus-interoperability-spec#37.
…orter (#2979) This is a follow-up to #2897.
Fixes #575
Fixes #2499
Fixes #2363
Fixes open-telemetry/prometheus-interoperability-spec#37
Fixes open-telemetry/prometheus-interoperability-spec#39
Fixes open-telemetry/prometheus-interoperability-spec#44
Passing compliance tests:
```
$ go test --tags=compliance -run "TestRemoteWrite/otelcollector/Job.+" -v ./
=== RUN TestRemoteWrite
=== RUN TestRemoteWrite/otelcollector
=== RUN TestRemoteWrite/otelcollector/JobLabel
=== PAUSE TestRemoteWrite/otelcollector/JobLabel
=== CONT TestRemoteWrite/otelcollector/JobLabel
--- PASS: TestRemoteWrite (10.02s)
--- PASS: TestRemoteWrite/otelcollector (0.00s)
--- PASS: TestRemoteWrite/otelcollector/JobLabel (10.02s)
PASS
ok      github.com/prometheus/compliance/remote_write  10.382s

$ go test --tags=compliance -run "TestRemoteWrite/otelcollector/Instance.+" -v ./
=== RUN TestRemoteWrite
=== RUN TestRemoteWrite/otelcollector
=== RUN TestRemoteWrite/otelcollector/InstanceLabel
=== PAUSE TestRemoteWrite/otelcollector/InstanceLabel
=== CONT TestRemoteWrite/otelcollector/InstanceLabel
--- PASS: TestRemoteWrite (10.01s)
--- PASS: TestRemoteWrite/otelcollector (0.00s)
--- PASS: TestRemoteWrite/otelcollector/InstanceLabel (10.01s)
PASS
ok      github.com/prometheus/compliance/remote_write  10.291s

$ go test --tags=compliance -run "TestRemoteWrite/otelcollector/RepeatedLabels.+" -v ./
=== RUN TestRemoteWrite
=== RUN TestRemoteWrite/otelcollector
--- PASS: TestRemoteWrite (0.00s)
--- PASS: TestRemoteWrite/otelcollector (0.00s)
testing: warning: no tests to run
PASS
```
The prom receiver expects a metric's job name to be the same as the one in the config. In order to keep behaviour similar to the cloudwatch agent's discovery implementation, we support getting the job name from a docker label, but it will break the metric type. For a long-term solution, see open-telemetry/opentelemetry-collector#575 (comment)
#3785)
* ext: ecsobserver Add filter and export yaml
* ext: ecsobserver Add unit test for overall discovery - Update README
* ext: ecsobserver Rename exported structs
* ext: ecsobserver Merge duplicated create test fetcher - Stop the collector process from the extension using `host.ReportFatalError`; otherwise the failure of the extension is only logged.
* ext: ecsobserver Explain the rename-job-label logic: the prom receiver expects a metric's job name to be the same as the one in the config. In order to keep behaviour similar to the cloudwatch agent's discovery implementation, we support getting the job name from a docker label, but it will break the metric type. For a long-term solution, see open-telemetry/opentelemetry-collector#575 (comment)
* ext: ecsobserver Inject test fetcher using config (was using context)
* ext: ecsobserver Test fatal error on component.Host
* ext: ecsobserver Move fetcher injection to its own func
* ext: ecsobserver Add comment to example config in README
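For illustration, a rough sketch of the docker-label based job naming the commit above refers to. The field names and values here (`docker_labels`, `port_label`, `job_name_label`, the cluster settings) are assumptions about the ecsobserver extension's configuration, not taken from its documentation:

```yaml
extensions:
  ecs_observer:
    cluster_name: my-ecs-cluster            # assumed cluster name
    cluster_region: us-west-2               # assumed region
    result_file: /etc/ecs_sd_targets.yaml   # targets file consumed by the prometheus receiver
    docker_labels:
      # The job name is read from a docker label on the task definition,
      # mirroring the cloudwatch agent's discovery behaviour.
      - port_label: ECS_PROMETHEUS_EXPORTER_PORT
        job_name_label: ECS_PROMETHEUS_JOB_NAME   # hypothetical label names
```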
* Remove GetDescriptor
* Add Must var hotfix
When using the Prometheus receiver and exporter I no longer get the `job` or `instance` labels that are default in Prometheus. Is this expected behaviour? When Prometheus scrapes them I lose these fields even with `honor_labels: true`. I have manually checked the metrics and they are missing these labels, so it's not an issue with Prometheus. Is there a way to manually add them back in with some `relabel_configs` magic? I have tried, but it seems to get ignored whenever I use the `job` or `instance` labels.
I eventually want two exporters here, one of which can be scraped locally by Prometheus and the other will forward on to another collector. Our default use case is without Prometheus, so scraping Prometheus with the collector isn't an option.
Version: v0.2.3
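For illustration, a minimal sketch of the setup described above, using a recent collector configuration layout; the endpoint addresses, ports, and job names are placeholders:

```yaml
# Collector side: scrape an app, then re-expose its metrics for Prometheus.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: my-app                    # placeholder job name
          static_configs:
            - targets: ['localhost:9100']     # placeholder target
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889                    # placeholder exporter endpoint
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheus]
```

And the downstream Prometheus server scraping the collector's exporter, keeping any `job`/`instance` labels the collector exposes instead of overwriting them:

```yaml
scrape_configs:
  - job_name: otel-collector
    honor_labels: true
    static_configs:
      - targets: ['collector-host:8889']      # placeholder address
```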