Specify optional Exponential Histogram Aggregation, add example code in the data model #2252
Conversation
@MrAlias FYI, this is meant to assist with eventually merging the reference implementation in OTel-Go.
The equations included in this PR are tested in the corresponding OTel-Go PR: open-telemetry/opentelemetry-go#2502
@beorn7 I would like your feedback on this proposal. To help you weigh the options, consider an OpenTelemetry SDK standing in as a Prometheus client. You have perfect control over the histogram behavior: you can choose a fixed scale factor and accept a variable size, or you can choose range limits, a fixed size, and a fixed scale, and I believe in a Prometheus setting these decisions should be made up front. As a first attempt, I outlined an exponential histogram aggregator with one mandatory setting (size) and one optional setting (range limits). The user would never set scale directly under this proposal. What do you think? cc/ @brian-brazil
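For concreteness, here is a minimal sketch of how a mandatory size plus optional range limits could imply the scale, using the base `2^(2^-scale)` indexing from the data model. This is illustrative only (not code from the PR; the function name is hypothetical):

```go
package main

import (
	"fmt"
	"math"
)

// impliedScale picks the largest integer scale at which maxSize buckets
// of base 2^(2^-scale) can span [min, max]: covering the range takes
// ceil(log2(max/min) * 2^scale) buckets, so we solve for scale.
func impliedScale(maxSize int, min, max float64) int {
	span := math.Log2(max / min) // powers of two covered by the range
	return int(math.Floor(math.Log2(float64(maxSize) / span)))
}

func main() {
	// 160 buckets covering [1ms, 10s] imply scale 3 (base 2^(1/8) ≈ 1.09).
	fmt.Println(impliedScale(160, 0.001, 10))
}
```

Under such a scheme the user reasons about size and range, and the scale falls out as a derived quantity.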
Yes, thank you @brianbrazil-seczetta. @brian-brazil one day we hope to add you to the OTel organization 😁
@jmacd Thanks for pinging me. I have trouble finding time to look at this in detail (since I'm working heads-down on the Prometheus histograms, hopefully getting them to a state where less is in flux and I can give better answers to OTel questions ;). I'm not quite sure what your request is here. In Prometheus, the instrumented binary decides what to expose, and then independent scrapers can do with it what they want (including reducing the resolution, AKA increasing the bucket width, if they want to store at lower resolution). https://github.com/prometheus/client_golang/blob/70253f4dd027a7128cdd681c22448d65ca30eed7/prometheus/histogram.go#L384-L407 is how to set the scale. https://github.com/prometheus/client_golang/blob/70253f4dd027a7128cdd681c22448d65ca30eed7/prometheus/histogram.go#L421-L440 sets the strategy for limiting the number of buckets, which, at first glance, looks similar to what's proposed here. Does this help? I'm sorry if I missed the point while just skimming this PR. Feel free to ask more specific questions, and I'll try my best in the time given to me.
Thank you @beorn7, very helpful feedback. Compatibility note: in OTel's protocol and this document I am trying to avoid letting the user set scale directly, because it is difficult to reason about. I was proposing that users either (a) do not set scale, or (b) configure max-size and min/max range limits, which together imply a fixed scale. I see that your configuration is more flexible. For OTel to emulate the behavior implied by your sparse-histogram settings, I will have to adjust this proposal, probably in two parts.
By the way, I see you have OTel reviewers; if you think having the ability to set the scale directly matters, please say so. I'll update this PR with point (1) above (i.e., make the limits independent) and we can address (2) when the time comes. Thank you!
Actually not. It's more like the
Yeah, and that's why we do what's described above. The growth factor gives you a good intuition of the precision the histogram provides.
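To make the factor-to-scale relationship concrete: with OTel's base `2^(2^-scale)`, a desired bucket growth factor maps to an integer scale as sketched below. This is my own illustration of the arithmetic, not the client_golang code linked above:

```go
package main

import (
	"fmt"
	"math"
)

// scaleForFactor returns the smallest scale whose bucket growth factor
// does not exceed the requested one: base(scale) = 2^(2^-scale), so
// scale = ceil(-log2(log2(factor))).
func scaleForFactor(factor float64) int {
	return int(math.Ceil(-math.Log2(math.Log2(factor))))
}

func main() {
	fmt.Println(scaleForFactor(1.1)) // 3: base 2^(1/8) ≈ 1.0905
	fmt.Println(scaleForFactor(2.0)) // 0: base 2
}
```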
I guess with the same line of argument that leads to a zero bucket of finite width, one might require an "overflow bucket" and an "underflow bucket" for observations exceeding a configurable max/min value. That would be nicely symmetric, but I guess the demand hasn't come up in practice because it is relatively easy to "accidentally" create observations very close to zero (due to floating-point precision issues, or if the observations come from actual physical measurements), while I would assume the cases where you accidentally create extremely large observations are much less common. So far, we have gone with not implementing overflow/underflow buckets, but if someone has a relevant need, please let me know.
I think this is all a misunderstanding, see above. In OTel terms, the Prometheus histograms (in the current PoC state) always have an integer scale between -8 and 4, and histograms can always be merged precisely to a histogram with the least common resolution.
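The lossless merge works because each unit decrease in scale halves the resolution, mapping every pair of adjacent buckets onto one coarser bucket. A sketch of the index mapping, assuming base-2 indexing as in the OTel data model:

```go
package sketch

// downscaleIndex maps a bucket index at scale `from` onto the containing
// bucket at the coarser scale `to` (to <= from). Each unit decrease in
// scale halves the resolution, so indexes shift right; Go's arithmetic
// shift floors correctly for negative indexes as well.
func downscaleIndex(index int64, from, to int32) int64 {
	return index >> uint(from-to)
}
```

Merging a scale-5 histogram into a scale-3 histogram then reduces to adding each scale-5 bucket count into bucket `downscaleIndex(i, 5, 3)`.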
By the way, thank you @beorn7 for clarifying: I understand why scaleFactor is a float now, and see no disagreements. :-)
@reyang The remaining question in this PR gets to a bigger question about handling NaN and Inf values in the Metrics API. Should Exponential Histograms treat NaN and Inf values any differently from Counter instruments? If no, what do we expect? If yes, what do we expect?
I think none of these should block this PR. And I think the API spec doesn't care about NaN/Inf. Here is something I did for .NET (and it's still experimental).
@aabmass @oertl what's your take on #2252 (comment)? I'm trying to understand your position: do you think we need to discuss more here (and we should not merge the PR before you feel comfortable signing off), or are we good to merge this PR and have a separate conversation about NaN/Inf/limits? Thanks!
@reyang I added two commits. d15ea33 is meant to address a question from Slack about Prometheus' exponential histogram interoperability. 1a473d0 is meant to answer the question you're asking. I believe we have sufficiently general statements about treating Inf and NaN values; however, for histogram aggregation I think we want all-or-none behavior, meaning the Sum, Count, Min, Max, and Buckets should be consistent. Since we can't have consistent results with Inf and NaN values, because they do not map into valid buckets, I think histogram implementations MUST disregard these. See what you think.
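In code terms, the all-or-none requirement amounts to a guard before any state is touched. A minimal sketch with a hypothetical `Aggregator` type (not the reference implementation):

```go
package sketch

import "math"

// Aggregator is a stand-in for the exponential histogram state.
type Aggregator struct {
	sum   float64
	count uint64
	// ... min, max, bucket counters ...
}

// Update records one measurement. NaN and ±Inf are rejected before any
// state changes, so Sum, Count, Min, Max, and Buckets always describe
// exactly the same set of measurements.
func (a *Aggregator) Update(value float64) {
	if math.IsNaN(value) || math.IsInf(value, 0) {
		return // no finite bucket can represent this value; disregard it
	}
	a.sum += value
	a.count++
	// ... update min/max and increment the bucket containing value ...
}
```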
Hey Team OpenTelemetry. CTO/co-founder of a startup that values your work and uses OpenTelemetry here. Is this going to be merged soon? We need this PR merged for some resolution changes we have internally at our company. I'd appreciate it if you could merge 🙏
Part of #1935.
This protocol was released in OTLP v0.11.
Related OTEP 149.
Changes
In open-telemetry/opentelemetry-collector#4642 I introduced temporary support for printing the exponential histogram data point, using the equations added to the data model here as examples.
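For reference, those equations boil down to boundaries of the form `base^index` with `base = 2^(2^-scale)`. A small sketch of my own (mirroring the data model's formulas) that prints a few positive-bucket ranges:

```go
package main

import (
	"fmt"
	"math"
)

// lowerBoundary computes base^index with base = 2^(2^-scale); the
// positive bucket at `index` covers (base^index, base^(index+1)].
func lowerBoundary(index int64, scale int32) float64 {
	return math.Exp2(float64(index) * math.Exp2(-float64(scale)))
}

func main() {
	for i := int64(0); i < 4; i++ {
		fmt.Printf("bucket %d: (%.6f, %.6f]\n",
			i, lowerBoundary(i, 3), lowerBoundary(i+1, 3))
	}
}
```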
This document is meant to justify merging the reference implementation shown in this (unfortunately LARGE) PR: open-telemetry/opentelemetry-go#2393
I've proposed to split it into two parts: open-telemetry/opentelemetry-go#2501. The implementation is described in further detail in its README.
The specification changes in this PR describe the critical aspects of this reference implementation in terms of the new Aggregation's two configuration parameters, MaxSize and RangeLimits (optional), and two requirements for its behavior.
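As an illustration, the configuration surface looks roughly like the following. Field names follow this PR's terminology, but the struct itself is hypothetical:

```go
package sketch

// ExponentialHistogramConfig sketches the new Aggregation's parameters.
type ExponentialHistogramConfig struct {
	// MaxSize bounds the number of buckets (mandatory).
	MaxSize int32

	// RangeLimits optionally fixes the expected [Min, Max] of the
	// measurements, which in turn fixes the scale. When unset, the
	// implementation adjusts the scale to fit observed values.
	RangeLimits *RangeLimits
}

// RangeLimits holds the optional expected measurement range.
type RangeLimits struct {
	Min, Max float64
}
```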
As further motivation, there is a PR to add `receiver/statsdreceiver` support using this reference implementation, open-telemetry/opentelemetry-collector-contrib#6666, which is waiting for everything above to be merged.