
[dbnode][coordinator] Support match[] in label endpoints #3180

Merged: 33 commits into master on Feb 8, 2021

Conversation

wesleyk (Collaborator) commented on Feb 5, 2021

What this PR does / why we need it:

Support match[] in label endpoints

Does this PR introduce a user-facing and/or backwards incompatible change?:
NONE

Does this PR require updating code package or user-facing documentation?:
NONE
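The integration tests later in this PR exercise the new `match[]` parameter by counting entries in the JSON responses with jq. A minimal, self-contained sketch of that parsing step (the response body below is illustrative, not real coordinator output):

```shell
# Illustrative /api/v1/labels response body (not real coordinator output):
resp='{"status":"success","data":["__name__","role","shard"]}'

# Count the returned label names, as the retry_with_backoff assertions do:
count=$(echo "$resp" | jq -r '.data | length')
echo "$count"   # 3
```

In the real tests the response comes from `curl -s` against the coordinator, and the count is compared with `-eq`/`-gt` inside a `[[ ... ]]` test.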

@robskillington robskillington changed the title Support match[] in label endpoints [dbnode][coordinator] Support match[] in label endpoints Feb 5, 2021
codecov bot commented on Feb 7, 2021

Codecov Report

Merging #3180 (98f75e7) into master (fe7c77e) will increase coverage by 0.0%.
The diff coverage is 64.8%.


@@           Coverage Diff           @@
##           master    #3180   +/-   ##
=======================================
  Coverage    72.3%    72.3%           
=======================================
  Files        1087     1087           
  Lines      100712   100701   -11     
=======================================
+ Hits        72819    72840   +21     
+ Misses      22831    22802   -29     
+ Partials     5062     5059    -3     
Flag Coverage Δ
aggregator 75.8% <ø> (-0.1%) ⬇️
cluster 85.0% <ø> (ø)
collector 84.3% <ø> (ø)
dbnode 78.7% <77.5%> (-0.1%) ⬇️
m3em 74.4% <ø> (ø)
m3ninx 73.2% <71.1%> (+<0.1%) ⬆️
metrics 20.0% <ø> (ø)
msg 74.0% <ø> (+0.2%) ⬆️
query 67.2% <54.5%> (+<0.1%) ⬆️
x 80.5% <ø> (+0.2%) ⬆️

Flags with carried forward coverage won't be shown.



Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update fe7c77e...98f75e7.

Comment on lines 410 to 432
# Test label search with match
ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/labels?start=0&end=9999999999999.99999" | jq -r ".data | length") -gt 3 ]]'

ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/labels?match[]=label_metric&start=0&end=9999999999999.99999" | jq -r ".data | length") -eq 4 ]]'

ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/labels?match[]=label_metric_2&start=0&end=9999999999999.99999" | jq -r ".data | length") -eq 3 ]]'

# Test label values search with match
ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/label/name_1/values?start=0&end=9999999999999.99999" | jq -r ".data | length") -eq 2 ]]' # two values without a match

ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/label/name_1/values?match[]=label_metric&start=0&end=9999999999999.99999" | jq -r ".data | length") -eq 1 ]]'
ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/label/name_1/values?match[]=label_metric&start=0&end=9999999999999.99999" | jq -r ".data[0]") = "value_1_1" ]]'

ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/label/name_1/values?match[]=label_metric_2&start=0&end=9999999999999.99999" | jq -r ".data | length") -eq 1 ]]'
ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/label/name_1/values?match[]=label_metric_2&start=0&end=9999999999999.99999" | jq -r ".data[0]") = "value_1_2" ]]'
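`retry_with_backoff` is a helper from the m3 integration-test scripts; its implementation is not shown in this diff. A hypothetical sketch of such a helper, inferred only from the `ATTEMPTS`/`TIMEOUT`/`MAX_TIMEOUT` usage above (the actual m3 helper may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a retry helper like the one used above: retries
# the given command string up to ATTEMPTS times, doubling the sleep
# between tries from TIMEOUT seconds up to a cap of MAX_TIMEOUT seconds.
retry_with_backoff() {
  local attempts="${ATTEMPTS:-5}" timeout="${TIMEOUT:-1}" max="${MAX_TIMEOUT:-4}"
  local i
  for ((i = 1; i <= attempts; i++)); do
    if eval "$1"; then
      return 0
    fi
    sleep "$timeout"
    # Double the backoff, capped at MAX_TIMEOUT.
    timeout=$(( timeout * 2 > max ? max : timeout * 2 ))
  done
  return 1
}

ATTEMPTS=3 TIMEOUT=0 retry_with_backoff '[[ -n "hello" ]]' && echo "succeeded"
```

Passing the assertion as a quoted string (as the tests above do) lets the helper re-run the `curl | jq` pipeline on every attempt rather than evaluating it once.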
Collaborator:

nit: Maybe we don't specify start/end to test that they default to the right min/max values if not specified?


# Test label search with match
ATTEMPTS=5 TIMEOUT=2 MAX_TIMEOUT=4 retry_with_backoff \
'[[ $(curl -s "0.0.0.0:7201/api/v1/labels?start=0&end=9999999999999.99999" | jq -r ".data | length") -gt 3 ]]'
Collaborator:

Hm, I believe this could start failing if match[] is not specified and any other global labels are used in metrics sent to the coordinator?

Maybe it would be better to check whether the expected values are contained in the result, rather than asserting on the total count.

I think you can use a reduce and filter statement to get a unique count of the terms you expect and compare against that?

It might end up as a long line, obviously, but I think this is better than flaky tests. Here's an example:

$ echo '["foo","bar","baz"]' | jq -Mc 'reduce (.[] | select(. == "foo" or . == "bar")) as $item ({}; .[$item] += 1)'
{"foo":1,"bar":1}

Collaborator (Author):

Ah, that was the intention of the `-gt 3` check. Still, being explicit would be good, so yeah, I'll use that.

@robskillington robskillington merged commit 6d1e01e into master Feb 8, 2021
@robskillington robskillington deleted the matchers-for-label-endpoints branch February 8, 2021 06:30
soundvibe added a commit that referenced this pull request Feb 10, 2021
* master: (30 commits)
  [dbnode] Use go context to cancel index query workers after timeout (#3194)
  [aggregator] Fix change ActivePlacement semantics on close (#3201)
  [aggregator] Simplify (Active)StagedPlacement API (#3199)
  [aggregator] Checking if metadata is set to default should not cause copying (#3198)
  [dbnode] Remove readers and writer from aggregator API (#3122)
  [aggregator] Avoid large copies in entry rollup comparisons by making them more inline-friendly (#3195)
  [dbnode] Re-add aggregator doc limit update (#3137)
  [m3db] Do not close reader in filterFieldsIterator.Close() (#3196)
  Revert "Remove disk series read limit (#3174)" (#3193)
  [instrument] Improve sampled timer and stopwatch performance (#3191)
  Omit unset fields in metadata json (#3189)
  [dbnode] Remove left-over code in storage/bootstrap/bootstrapper (#3190)
  [dbnode][coordinator] Support match[] in label endpoints (#3180)
  Instrument the worker pool with the wait time (#3188)
  Instrument query path (#3182)
  [aggregator] Remove indirection, large copy from unaggregated protobuf decoder (#3186)
  [aggregator] Sample timers completely (#3184)
  [aggregator] Reduce error handling overhead in rawtcp server (#3183)
  [aggregator] Move shardID calculation out of critical section (#3179)
  Move instrumentation cleanup to FetchTaggedResultIterator Close() (#3173)
  ...