
[aggregator] Fix change ActivePlacement semantics on close #3201

Merged
merged 1 commit into master from v/asp_atomic_2 on Feb 10, 2021

Conversation

@vdarulis (Collaborator) commented Feb 9, 2021

What this PR does / why we need it:

Relax the closing semantics, as the current behavior causes spurious errors. Previously, locking guaranteed precise ordering; but in either case the placement itself is not invalidated just because it was closed while an update was about to replace it.
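A minimal Go sketch of the relaxed behavior described above. The `activePlacement` type, its fields, and the error mentioned in the comments are illustrative stand-ins under assumption, not the actual m3 aggregator API: the point is only that Close marks the wrapper closed without invalidating the placement, so a reader racing with a close-for-update no longer sees a spurious error.

```go
// Illustrative sketch only: hypothetical names, not the real m3 types.
package main

import (
	"fmt"
	"sync"
)

type activePlacement struct {
	mu        sync.RWMutex
	placement string // stand-in for the real placement value
	closed    bool
}

// Placement returns the current placement. With the relaxed semantics a
// prior or concurrent Close is not treated as an error: the data is still
// valid even while the wrapper is being swapped out for an updated one.
func (p *activePlacement) Placement() (string, error) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.placement, nil
}

// Close marks the wrapper closed. It is idempotent and leaves the
// placement itself untouched.
func (p *activePlacement) Close() error {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.closed = true
	return nil
}

func main() {
	p := &activePlacement{placement: "placement-v1"}
	_ = p.Close() // closed because an updated placement is about to replace it

	// Under the old, strict semantics this read would have failed with a
	// "placement closed" error; now it still returns the placement.
	pl, err := p.Placement()
	fmt.Println(pl, err) // placement-v1 <nil>
}
```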

Special notes for your reviewer:

Does this PR introduce a user-facing and/or backwards incompatible change?:


Does this PR require updating code package or user-facing documentation?:


@vdarulis vdarulis requested review from abliqo and mway February 9, 2021 23:31
codecov bot commented Feb 9, 2021

Codecov Report

Merging #3201 (1e03a77) into master (0526c0f) will increase coverage by 1.0%.
The diff coverage is n/a.


@@            Coverage Diff            @@
##           master    #3201     +/-   ##
=========================================
+ Coverage    71.2%    72.3%   +1.0%     
=========================================
  Files        1089     1087      -2     
  Lines      101883   100683   -1200     
=========================================
+ Hits        72558    72794    +236     
+ Misses      24206    22835   -1371     
+ Partials     5119     5054     -65     
Flag         Coverage Δ
aggregator   75.8% <ø> (-0.1%) ⬇️
cluster      84.8% <ø> (-0.1%) ⬇️
collector    84.3% <ø> (ø)
dbnode       78.7% <ø> (+2.7%) ⬆️
m3em         74.4% <ø> (ø)
m3ninx       73.2% <ø> (-0.1%) ⬇️
metrics      20.0% <ø> (ø)
msg          74.0% <ø> (-0.2%) ⬇️
query        67.2% <ø> (ø)
x            80.2% <ø> (ø)

Flags with carried forward coverage won't be shown.


Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0526c0f...1e03a77.

@vdarulis vdarulis merged commit 82cecd5 into master Feb 10, 2021
@vdarulis vdarulis deleted the v/asp_atomic_2 branch February 10, 2021 03:08
soundvibe added a commit that referenced this pull request Feb 10, 2021
* master: (30 commits)
  [dbnode] Use go context to cancel index query workers after timeout (#3194)
  [aggregator] Fix change ActivePlacement semantics on close (#3201)
  [aggregator] Simplify (Active)StagedPlacement API (#3199)
  [aggregator] Checking if metadata is set to default should not cause copying (#3198)
  [dbnode] Remove readers and writer from aggregator API (#3122)
  [aggregator] Avoid large copies in entry rollup comparisons by making them more inline-friendly (#3195)
  [dbnode] Re-add aggregator doc limit update (#3137)
  [m3db] Do not close reader in filterFieldsIterator.Close() (#3196)
  Revert "Remove disk series read limit (#3174)" (#3193)
  [instrument] Improve sampled timer and stopwatch performance (#3191)
  Omit unset fields in metadata json (#3189)
  [dbnode] Remove left-over code in storage/bootstrap/bootstrapper (#3190)
  [dbnode][coordinator] Support match[] in label endpoints (#3180)
  Instrument the worker pool with the wait time (#3188)
  Instrument query path (#3182)
  [aggregator] Remove indirection, large copy from unaggregated protobuf decoder (#3186)
  [aggregator] Sample timers completely (#3184)
  [aggregator] Reduce error handling overhead in rawtcp server (#3183)
  [aggregator] Move shardID calculation out of critical section (#3179)
  Move instrumentation cleanup to FetchTaggedResultIterator Close() (#3173)
  ...