output/cloudv2: Flush chunks #3108
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #3108      +/-   ##
==========================================
+ Coverage   73.74%   73.82%   +0.07%
==========================================
  Files         241      243       +2
  Lines       18460    18474      +14
==========================================
+ Hits        13614    13638      +24
+ Misses       3972     3966       -6
+ Partials      874      870       -4
```
Force-pushed from b665323 to bc16911
```go
if len(msb.seriesIndex) < f.maxSeriesInSingleBatch {
	continue
}
```
I was a bit wary of an `if` in this loop... but to be honest, `addTimeBucket` is so much more complicated that this is likely not an issue.
If it is, we can always optimize it.
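For context, the guard under discussion keeps appending series to the current batch and only flushes once the batch is full. The following is a minimal, self-contained sketch of that early-`continue` pattern; the `maxSeriesInSingleBatch` name comes from the diff, while everything else (the `flushInChunks` helper, the string series) is hypothetical and simplified for illustration:

```go
package main

import "fmt"

// maxSeriesInSingleBatch caps how many unique series one batch may hold.
// The field name mirrors the diff; the value here is arbitrary.
const maxSeriesInSingleBatch = 3

// flushInChunks groups incoming series names into batches of at most
// maxSeriesInSingleBatch entries, flushing each batch as soon as it fills.
func flushInChunks(series []string) [][]string {
	var chunks [][]string
	seriesIndex := map[string]int{}
	var current []string
	for _, s := range series {
		seriesIndex[s] = len(seriesIndex)
		current = append(current, s)
		// The guard from the diff: keep filling until the batch is full.
		if len(seriesIndex) < maxSeriesInSingleBatch {
			continue
		}
		chunks = append(chunks, current)
		seriesIndex = map[string]int{}
		current = nil
	}
	// Flush whatever is left over as a final, possibly smaller chunk.
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}

func main() {
	chunks := flushInChunks([]string{"a", "b", "c", "d", "e"})
	fmt.Println(len(chunks)) // 2 chunks: [a b c] and [d e]
}
```

The branch is well predicted in practice (it is false only once per batch), which supports the reviewer's point that it is unlikely to matter next to the cost of `addTimeBucket`.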
LGTM!
Left a nitpick that is unrelated to the current PR.
Force-pushed from bc16911 to 6e63afd
I still don't fully grok the new cloud output, but this change LGTM.
If the number of time series is higher than the maximum batch size, then split them into chunks.
Part of #3117
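The splitting step the description refers to can be sketched as a plain slice-chunking helper. This is a minimal illustration, not the PR's actual implementation; the `chunkSeries` name and the `int` element type are assumptions:

```go
package main

import "fmt"

// chunkSeries splits ids into consecutive slices of at most maxBatch
// entries, so no single upload exceeds the batch size limit.
func chunkSeries(ids []int, maxBatch int) [][]int {
	var out [][]int
	for len(ids) > maxBatch {
		out = append(out, ids[:maxBatch])
		ids = ids[maxBatch:]
	}
	if len(ids) > 0 {
		out = append(out, ids)
	}
	return out
}

func main() {
	fmt.Println(chunkSeries([]int{1, 2, 3, 4, 5}, 2)) // [[1 2] [3 4] [5]]
}
```

Note that the sub-slices share the backing array of the input, which avoids copying; if the chunks outlive the input slice, an explicit copy would be safer.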