[Metrics SDK] Performance improvement in measurement processing #1993
Conversation
Codecov Report

@@            Coverage Diff             @@
##             main    #1993      +/-   ##
==========================================
+ Coverage   87.32%   87.32%    +0.01%
==========================================
  Files         166      166
  Lines        4673     4723       +50
==========================================
+ Hits         4080     4124       +44
- Misses        593      599        +6
This is ready for review now. @ThomsonTan, @esigo please help review this PR. We can enable exemplar once this is merged, as exemplar has its own performance cost.
Co-authored-by: Tom Tan <lilotom@gmail.com>
Added a few more changes to reduce the lock contention (by removing ...). Benchmark results were collected with 1, 5, and 19 threads.
sdk/include/opentelemetry/sdk/metrics/state/attributes_hashmap.h
The results are more promising when building the application in Release mode, with similar results on Windows and Linux (measured with 1, 5, and 19 threads).
LGTM
Thanks for the PR :)
sorry for the delay
This PR attempts to improve measurement processing throughput; the benchmark measures the time taken by the SDK to process 1 million measurements with multiple dimensions and high cardinality.
The change removes unnecessary memory copy operations done for each input measurement. The measurement attributes are now copied to OrderedAttributeMap only if they are not already present in the AttributesHashMap.

Existing flow:
1. instrument::Add(value, attributes)
2. Copy attributes to OrderedAttributeMap
3. Calculate hash for attributes in OrderedAttributeMap
4. Add attributes to AttributesHashMap if not existing (by comparing hash calculated in step 3)
5. Aggregate(value)
New flow:
1. instrument::Add(value, attributes)
2. Calculate hash for attributes in KeyValueIterable
3. Copy attributes to OrderedAttributeMap, then add to AttributesHashMap if not existing (by comparing hash calculated in step 2)
4. Aggregate(value)
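Below is a minimal, self-contained sketch of the hash-first idea behind the new flow. The names here (KeyValueView, AttrMap, AttrHashMap, HashAttributes, Add) are simplified stand-ins for the SDK's KeyValueIterable, OrderedAttributeMap, AttributesHashMap, and instrument code, and the hash-combine is illustrative only.

#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Simplified stand-ins for the SDK types mentioned above.
using AttrMap      = std::map<std::string, std::string>;                // ~ OrderedAttributeMap
using KeyValueView = std::vector<std::pair<std::string, std::string>>;  // ~ KeyValueIterable

// Hash the caller-supplied attributes directly, without copying them first.
std::size_t HashAttributes(const KeyValueView &attrs)
{
  std::size_t seed = 0;
  for (const auto &kv : attrs)
  {
    // Illustrative hash-combine; the SDK has its own hashing logic.
    seed ^= std::hash<std::string>{}(kv.first) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
    seed ^= std::hash<std::string>{}(kv.second) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
  }
  return seed;
}

struct Aggregation
{
  double sum = 0;
  void Aggregate(double value) { sum += value; }
};

// Stand-in for AttributesHashMap: entries are keyed by the precomputed hash.
class AttrHashMap
{
public:
  // Copies the attributes into an ordered map only when the hash is seen for
  // the first time; otherwise the existing aggregation is returned directly.
  Aggregation &GetOrCreate(std::size_t hash, const KeyValueView &attrs)
  {
    auto it = map_.find(hash);
    if (it == map_.end())
    {
      AttrMap copy(attrs.begin(), attrs.end());  // the only place a copy happens
      it = map_.emplace(hash, Entry{std::move(copy), Aggregation{}}).first;
    }
    return it->second.aggregation;
  }

private:
  struct Entry
  {
    AttrMap attributes;
    Aggregation aggregation;
  };
  std::unordered_map<std::size_t, Entry> map_;
};

// New flow: hash first, copy only on a miss, then aggregate.
void Add(AttrHashMap &storage, double value, const KeyValueView &attrs)
{
  std::size_t hash = HashAttributes(attrs);
  storage.GetOrCreate(hash, attrs).Aggregate(value);
}

int main()
{
  AttrHashMap storage;
  Add(storage, 1.0, {{"host", "a"}, {"region", "us"}});
  Add(storage, 2.0, {{"host", "a"}, {"region", "us"}});  // reuses the existing entry, no copy
  return 0;
}

The point of the reordering is that for an already-seen attribute set, which is the common case once the cardinality is populated, no OrderedAttributeMap copy is made at all.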
The processing time for 1 million measurements is reduced from 3.7 seconds to 1.9 seconds (CPU: Intel(R) Core(TM) i9-10900 @ 2.80GHz, 20 processors; Memory: 32 GB).
The test results are for the scenario below (the benchmark code is added as part of this PR in measurements_benchmark.cc):

Benchmark with this PR changes:

Benchmark with existing code:
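As a rough illustration of the scenario above, the sketch below spins up multiple threads that together record 1 million measurements with multi-dimension, high-cardinality attributes and reports the elapsed time. The Record() function, attribute keys, and cardinality constant are hypothetical stand-ins; the actual harness added in this PR is measurements_benchmark.cc, which records through the real SDK instruments.

#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <thread>
#include <vector>

using Attributes = std::map<std::string, std::string>;

constexpr std::size_t kTotalMeasurements = 1000000;  // measurements processed per run
constexpr std::size_t kCardinality       = 1000;     // distinct values for the first dimension

std::atomic<std::uint64_t> g_sink{0};

// Trivial stand-in for instrument::Add(value, attributes); the real benchmark
// drives the SDK's synchronous instruments instead.
void Record(double value, const Attributes &attrs)
{
  g_sink += static_cast<std::uint64_t>(value) + attrs.size();
}

int main()
{
  const std::size_t num_threads = 5;  // e.g. 1, 5, or 19 as in the results above

  auto worker = [](std::size_t per_thread) {
    for (std::size_t i = 0; i < per_thread; ++i)
    {
      // Multiple dimensions; dim1 cycles through many values to create high cardinality.
      Attributes attrs{{"dim1", std::to_string(i % kCardinality)},
                       {"dim2", std::to_string(i % 10)},
                       {"dim3", "static-value"}};
      Record(1.0, attrs);
    }
  };

  auto start = std::chrono::steady_clock::now();
  std::vector<std::thread> threads;
  for (std::size_t t = 0; t < num_threads; ++t)
  {
    threads.emplace_back(worker, kTotalMeasurements / num_threads);
  }
  for (auto &t : threads)
  {
    t.join();
  }
  auto elapsed_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                        std::chrono::steady_clock::now() - start)
                        .count();

  std::cout << "Processed " << kTotalMeasurements << " measurements with " << num_threads
            << " threads in " << elapsed_ms << " ms\n";
  return 0;
}

Varying num_threads (1, 5, 19) and building in Release mode corresponds to the configurations quoted in the comments above.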
For significant contributions please make sure you have completed the following items:
CHANGELOG.md updated for non-trivial changes