
[WIP] Test Cases for JMX -> Prom Exporter Regexps #14155

Open · wants to merge 7 commits into base: master
Conversation

suddendust (Contributor)
This PR adds test cases to validate the regexps used by the JMX->Prom exporter to translate JMX metrics into a Prometheus-consumable format. Ref: https://github.com/prometheus/jmx_exporter.
The test case validates the following:

  1. All meters/gauges/timers defined in ServerMeter.java, ServerGauge.java, or ServerTimer.java are exported. This gives us some confidence, when adding a new metric to the code, that it is actually being exported (we will still need to add test cases asserting on the different labels, etc.).

  2. Existing metrics have the right name and labels.

This is a WIP PR; I have raised it early to get feedback.
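To illustrate the kind of translation these tests validate, here is a minimal, self-contained sketch. The pattern and metric names below are hypothetical, not the exporter's actual config: it checks that a dotted JMX-style metric name is rewritten into a Prometheus name plus a table label.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JmxRegexSketch {
    // Hypothetical rule: capture the table name and the metric suffix from
    // a name like "pinot.server.myTable_REALTIME.realtimeRowsConsumed".
    static final Pattern RULE = Pattern.compile("pinot\\.server\\.([^.]+)\\.(\\w+)");

    // Returns {promMetricName, tableLabelValue}, or null if the rule doesn't match.
    static String[] translate(String jmxName) {
        Matcher m = RULE.matcher(jmxName);
        if (!m.matches()) {
            return null;
        }
        return new String[]{"pinot_server_" + m.group(2), m.group(1)};
    }

    public static void main(String[] args) {
        String[] out = translate("pinot.server.myTable_REALTIME.realtimeRowsConsumed");
        System.out.println(out[0] + " table=" + out[1]);
        // prints: pinot_server_realtimeRowsConsumed table=myTable_REALTIME
    }
}
```

A test case then asserts on both the translated name and the extracted label values for a set of sample inputs.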

codecov-commenter commented Oct 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 34.41%. Comparing base (59551e4) to head (417a8a6).
Report is 1135 commits behind head on master.

❗ There is a different number of reports uploaded between BASE (59551e4) and HEAD (417a8a6).

HEAD has 29 fewer uploads than BASE:

| Flag | BASE (59551e4) | HEAD (417a8a6) |
|---|---|---|
| integration | 7 | 4 |
| integration2 | 3 | 2 |
| temurin | 12 | 6 |
| java-21 | 7 | 5 |
| skip-bytebuffers-true | 3 | 2 |
| skip-bytebuffers-false | 7 | 3 |
| unittests | 5 | 2 |
| unittests1 | 2 | 0 |
| java-11 | 5 | 1 |
| unittests2 | 3 | 2 |
| integration1 | 2 | 1 |
| custom-integration1 | 2 | 1 |
Additional details and impacted files
@@              Coverage Diff              @@
##             master   #14155       +/-   ##
=============================================
- Coverage     61.75%   34.41%   -27.34%     
- Complexity      207      739      +532     
=============================================
  Files          2436     2621      +185     
  Lines        133233   144074    +10841     
  Branches      20636    22041     +1405     
=============================================
- Hits          82274    49585    -32689     
- Misses        44911    90493    +45582     
+ Partials       6048     3996     -2052     
| Flag | Coverage Δ | |
|---|---|---|
| custom-integration1 | 100.00% <ø> (+99.99%) | ⬆️ |
| integration | 100.00% <ø> (+99.99%) | ⬆️ |
| integration1 | 100.00% <ø> (+99.99%) | ⬆️ |
| integration2 | 0.00% <ø> (ø) | |
| java-11 | 34.32% <ø> (-27.39%) | ⬇️ |
| java-21 | 34.40% <ø> (-27.22%) | ⬇️ |
| skip-bytebuffers-false | 34.32% <ø> (-27.43%) | ⬇️ |
| skip-bytebuffers-true | 34.40% <ø> (+6.67%) | ⬆️ |
| temurin | 34.41% <ø> (-27.34%) | ⬇️ |
| unittests | 34.41% <ø> (-27.34%) | ⬇️ |
| unittests1 | ? | |
| unittests2 | 34.41% <ø> (+6.68%) | ⬆️ |

Flags with carried forward coverage won't be shown.


@Jackie-Jiang (Contributor) left a comment:

Thanks for adding the test!

Right now all tests are hard-coded, so they won't capture newly added metrics automatically. Instead, can we loop over the enums to ensure all the metrics are tested, which is also future-proof?
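The loop-over-enums idea could look roughly like the sketch below. It uses a stand-in enum with a few illustrative constants; the real test would iterate `org.apache.pinot.common.metrics.ServerMeter.values()`, and the name-matching rule here is an assumption, not Pinot's actual naming scheme.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class EnumCoverageSketch {
    // Hypothetical stand-in for org.apache.pinot.common.metrics.ServerMeter.
    enum ServerMeter { QUERIES, REALTIME_ROWS_CONSUMED, DELETED_SEGMENT_COUNT }

    // Returns the enum constants with no corresponding exported metric.
    static List<ServerMeter> uncovered(Set<String> exportedMetricNames) {
        List<ServerMeter> missing = new ArrayList<>();
        for (ServerMeter meter : ServerMeter.values()) {
            // Compare lower-cased names with underscores stripped, since exported
            // names are not derived uniformly from the enum constants.
            String key = meter.name().toLowerCase().replace("_", "");
            boolean found = exportedMetricNames.stream()
                    .anyMatch(n -> n.toLowerCase().replace("_", "").contains(key));
            if (!found) {
                missing.add(meter);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Set<String> exported = Set.of("pinot_server_queries", "pinot_server_realtimeRowsConsumed");
        System.out.println(uncovered(exported));
        // prints: [DELETED_SEGMENT_COUNT]
    }
}
```

The test would assert that `uncovered(...)` is empty for the full scraped export.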

suddendust commented Oct 3, 2024

> can we loop over the enums to ensure all the metrics are tested, which is also future proof?

Yes, that's the ideal way to do it. However, exported metric names are not standardized, so it's not possible to derive them from the enum names (we should standardize them going forward). Further, metrics accept different kinds of arguments for labelling. For example, some accept rawTableName, some accept tableNameWithType, and some accept clientId (tableNameWithType-partition-topic). Determining these would have to be done on a case-by-case basis.
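One way to handle the case-by-case labelling is a hand-maintained expectations map, so the enum loop can still run even though labels can't be derived mechanically. This is a hypothetical sketch; the label sets shown are illustrative, not the exporter's actual labels.

```java
import java.util.List;
import java.util.Map;

public class LabelExpectations {
    // Hypothetical stand-in for ServerMeter.
    enum ServerMeter { QUERIES, REALTIME_ROWS_CONSUMED }

    // Hand-maintained, per-metric label expectations (illustrative values only).
    // A new enum constant without an entry here would make a lookup fail fast.
    static final Map<ServerMeter, List<String>> EXPECTED_LABELS = Map.of(
            ServerMeter.QUERIES, List.of("table", "tableType"),
            ServerMeter.REALTIME_ROWS_CONSUMED,
            List.of("table", "tableType", "partition", "topic"));

    public static void main(String[] args) {
        EXPECTED_LABELS.forEach((metric, labels) ->
                System.out.println(metric + " -> " + labels));
    }
}
```

The test loop would then fail if any enum constant lacks an entry, forcing authors of new metrics to declare the expected labels.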

suddendust commented Oct 3, 2024

> it won't be able to capture newly added metrics automatically

For this, I have added a check in each test case that asserts on the count of exported metrics; for any newly added metric, this check would fail. It's not fool-proof, but it does provide a basic check. Perhaps I can strengthen it further with a check that verifies, for each enum, that there is an exported metric containing the enum string in some form.
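The count-based guard described here could be sketched as follows (stand-in enum; the method name is hypothetical): it fails whenever the number of exported metric families drifts from the number of enum constants, e.g. when a new meter is added without a test update.

```java
import java.util.Set;

public class MetricCountGuard {
    // Hypothetical stand-in; the real check would use ServerMeter.values().length.
    enum ServerMeter { QUERIES, REALTIME_ROWS_CONSUMED }

    // Throws if the exported family count doesn't match the enum constant count.
    static void assertExportedCount(Set<String> exportedFamilies) {
        int expected = ServerMeter.values().length;
        if (exportedFamilies.size() != expected) {
            throw new AssertionError("expected " + expected
                    + " exported metric families, got " + exportedFamilies.size());
        }
    }

    public static void main(String[] args) {
        assertExportedCount(Set.of("pinot_server_queries", "pinot_server_realtimeRowsConsumed"));
        System.out.println("count check passed");
    }
}
```

As noted, this catches additions and removals but not a renamed metric replacing another, which is where the per-enum containment check would help.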
