Conversation
Force-pushed from fcc670e to 83d125c
Force-pushed from a02edb9 to 9a49090
looks good to my untrained eye
api/dataprocessor.go
Outdated
return mdata.MetricResult{
	Oldest: ctx.Req.To,
	Iters:  make([]chunk.Iter, 0),
	Raw:    make([]schema.Point, 0),
needless allocations
conf/aggregations.go
Outdated
@@ -85,6 +91,38 @@ func ReadAggregations(file string) (Aggregations, error) {
	}
}

writeBufferStr := s.ValueOf("writeBuffer")
where is this format documented? there's 2 files to update:
./scripts/config/storage-aggregation.conf
./docker/docker-dev-custom-cfg-kafka/storage-aggregation.conf
also run scripts/config-to-doc.sh to update docs/config.md
Of course you're right that this stuff needs to be documented. I just prefer to first get the code into a mergeable state and then document the changes, because until the code is good enough to merge I don't know how much of it will change again, and then I'd constantly need to keep the documentation up to date.
ok. it looks like the ReorderWindow could be nicer by calling dur.ParseNDuration or dur.ParseDuration, as it allows for human-friendly syntax like 10min30s. we use this approach already in a bunch of other options like the chunkspan settings (note: this was just renamed; in the version this PR is based on it was still called dur.ParseUNsec, so i suggest you rebase on top of master)
That just reminded me of something:
If we allow that to be specified as a time we'll need to enforce that this time is a multiple of the interval. This reminded me that the rollups have different intervals than the first aggregate, so when the write buffer is instantiated for the rollups the reorder window should be divided by raw interval / aggregate interval
so when the write buffer is instantiated for the rollups
what exactly do you mean by rollups? The AggMetric representing a rollup for another AggMetric? that one does not have the reorder window. the reordering only happens on the "main" AggMetric (not any instantiated by its Aggregators), which I think is correct.
you're right, it wouldn't create a write buffer for the AggMetric that represents a rollup, which is good.
So if we'd allow a time duration to be specified, then we would have to enforce that it is a multiple of the raw interval. Do you think that makes sense? I'm not sure, as it might also just lead to unnecessary confusion if we error on a reorder window duration that does not satisfy this requirement.
I see no reason for this requirement. am i missing something?
Otherwise how would that be implemented? Just always ceil() it to the next multiple of the raw interval? I guess that would work too
yeah
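The ceil() idea above could look roughly like this (ceilMultiple is a hypothetical helper, not from the PR):

```go
package main

import "fmt"

// ceilMultiple rounds ts up to the next multiple of interval, as in
// the "just always ceil() it" idea above. hypothetical helper.
func ceilMultiple(ts, interval uint32) uint32 {
	return ((ts + interval - 1) / interval) * interval
}

func main() {
	fmt.Println(ceilMultiple(61, 10)) // 70
	fmt.Println(ceilMultiple(60, 10)) // 60
}
```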
mdata/aggmetric.go
Outdated
@@ -402,11 +429,24 @@ func (a *AggMetric) persist(pos int) {
	return
}

// don't ever call with a ts of 0, cause we use 0 to mean not initialized!
this comment should stay for both add and Add.
my main concern with this implementation is all the use of pointers, for the following reasons:

the last point is something I don't know that much about, but following pointer references is usually more cache-unfriendly and costs more cpu time compared to e.g. loading values out of a flat slice/array. I don't know if this point matters much or is very strong, but I do feel more strongly about the other two. let me go into more detail below, and i'll also describe a suggestion for an alternative solution.

1) GC overhead

the golang GC workload is proportional to the number of heap objects. We already have a large amount of objects, stressing our GC. I think this implementation would add a significant amount of objects (more than 50% in the example below), which I think is too much. I think it will also impact fragmentation and hence probably duration of allocations. AFAIK every object pointed to, every slice backed by an array, every string, and maybe also every map, causes 1 heap object. (and also function callbacks I think?)
below I attempt to simplify this structure a bit and apply the numbers for our ops cluster, it's a bit hard to follow, but you can skip to the outcomes below if you want. this is for 1 AggMetric, so needs to be multiplied by
so in formula this becomes:
I'm a bit off from the 40M but i also took some shortcuts (not accounting for the cache mainly I think). I think our target goal of number of heap objects should be only a few per metric. it definitely shouldn't scale up with the number of points.

2) space usage

the write buffer uses entries (heap objects) that have 2 pointers per point, so uint32, float64 and 2 pointers of 8B each is 28B (which i believe will be padded to 32B by the compiler), so what could have been 12B needs 32B, which is a 166% space increase

suggested alternative

We can tackle point 1 and 3, and potentially also point 2, with a slice based approach. How about something like:
we can navigate the slice as a circular buffer. if we know the data resolution in advance we can perfectly size it up front; if we don't, we can start small and increase the size as we need more space to accommodate the needed window size. this results in far fewer pointers, so less GC and other performance problems. in terms of space usage it depends how sparse the data is: if the resolution is 10s and there's data for every point 10s apart (just out of order), or 1 out of every 2 points, then this will use less space, but if there's 2 (or more) missing points for every known point, then your approach will be more space efficient I think
I think you're probably right that the GC overhead might become a problem if there are too many of these objects. Maybe I'm still too much in the C world where there's no GC overhead. Note that this WB can be configured on a per pattern basis, so we don't have to have it on all aggmetrics; if only some receive out of order data due to reasons that are out of our control, we can just enable it for those patterns. The circular buffer idea is good, although as you already said it only works well if the data is not very sparse. We could even have both implementations (because the interfaces would probably be pretty much the same) and let the user configure which they want depending on the sparsity of the data, although that would probably make it too complicated. I'll give the circular buffer idea a try
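The slice-based idea discussed above can be sketched as a toy circular buffer where the slot is derived from the interval-aligned timestamp, so out-of-order points within the window land in the right place. All names and the exact layout here are illustrative, not the PR's code:

```go
package main

import "fmt"

// cbuf is a toy slice-backed circular buffer of points.
type cbuf struct {
	interval uint32
	ts       []uint32 // 0 marks an empty slot
	val      []float64
}

func newCbuf(window, interval uint32) *cbuf {
	return &cbuf{
		interval: interval,
		ts:       make([]uint32, window),
		val:      make([]float64, window),
	}
}

// set stores a point in the slot derived from its timestamp.
func (c *cbuf) set(ts uint32, v float64) {
	i := (ts / c.interval) % uint32(len(c.ts))
	c.ts[i] = ts
	c.val[i] = v
}

// get returns the point for ts, if that slot still holds it.
func (c *cbuf) get(ts uint32) (float64, bool) {
	i := (ts / c.interval) % uint32(len(c.ts))
	if c.ts[i] != ts {
		return 0, false
	}
	return c.val[i], true
}

func main() {
	c := newCbuf(6, 10)
	c.set(30, 3) // in order
	c.set(10, 1) // out of order, still lands in its own slot
	v, ok := c.get(30)
	fmt.Println(v, ok) // 3 true
}
```

As noted in the exchange above, this trades space for sparse data: empty slots still occupy memory, which is where the per-point entry approach could win.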
Force-pushed from 24269ab to 3914f28
In the latest commit I've basically reimplemented the whole write buffer, and it still passes pretty much the same tests :)
Force-pushed from b48edad to b55bdf7
mdata/aggmetric_test.go
Outdated
for {
	if i == len(c.points) {
		return
	}
maybe use for i != len(c.points) {
here
👍
mdata/aggmetric_test.go
Outdated
func (a points) Len() int           { return len(a) }
func (a points) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a points) Less(i, j int) bool { return a[i].ts < a[j].ts }
the naming convention in golang for a type whose purpose is to enable sorting is to name the type after how the sorting is done. for example see the ByAge type in the first example at https://golang.org/pkg/sort/
So this would be called ByTs and then the call will look nicer (sort.Sort(ByTs(c.points)))
👍
👍 😎
mdata/init.go
Outdated
// ts is not older than the 60th datapoint counting from the newest.
metricsReordered = stats.NewCounter32("tank.metrics_reorderd")

// metric tank.metrics_too_old is points that go back in time beyond the scope of the reorder window.
reorder window is optional
👍
mdata/init.go
Outdated
// within the reorder window. in such a case they will be inserted in the correct order.
// E.g. if the reorder window is 60 (datapoints) then points may be inserted at random order as long as their
// ts is not older than the 60th datapoint counting from the newest.
metricsReordered = stats.NewCounter32("tank.metrics_reorderd")
typo, should be reordered
not reorderd
👍
this metric needs to be added to docs/metrics.md
(note the alphabetical ordering in that file)
mdata/write_buffer.go
Outdated
		break
	}
	oldest = (oldest + 1) % wb.len
}
i wonder if it's worthwhile to rewrite this as a one-or-two-appends instead of a loop and appending a point each time. the perf difference is probably marginal but worth trying and checking.
also, since we're using schema.Point
already, why do we still need the entry
type?
i'm not sure what you mean by a one-or-two append.
but you're right that we might as well just get rid of entry. then producing the result would be much easier, because the internal storage type matches the returned type, so all that's needed is to copy the existing points
👍
i'm not sure what you mean with a one-or-two append.
well basically the circular buffer is a slice that is either fully ordered, or has a break somewhere in the slice where the ordering restarts, right? so i'm just saying we could append the ordered sections, of which there are typically 2: from break to end, and from begin to break.
ah, yes. the problem is that the buffer might not be full, or have gaps in-between. so the 0s need to be filtered out, which is what i did in the latest version.
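The one-or-two-appends idea above can be sketched like this. It is only valid as written when the buffer is full with no gaps; with empty (zero-ts) slots the points still need filtering, which is the caveat just raised. Illustrative sketch, not the PR's code:

```go
package main

import "fmt"

// twoAppends reads a circular buffer out in order using two bulk
// appends: the segment from the oldest slot to the end of the backing
// slice, then the segment from the start up to the oldest slot.
func twoAppends(buf []uint32, oldest int) []uint32 {
	out := make([]uint32, 0, len(buf))
	out = append(out, buf[oldest:]...)
	out = append(out, buf[:oldest]...)
	return out
}

func main() {
	// slots 2..3 hold the oldest data; the ordering "breaks" at index 2
	fmt.Println(twoAppends([]uint32{30, 40, 10, 20}, 2)) // [10 20 30 40]
}
```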
mdata/write_buffer.go
Outdated
 * which is thread safe, so there is no locking in the buffer.
 */

type WriteBuffer struct {
I find this name a bit too ambiguous, especially since the options are called "reorder-foo"; it's confusing to talk about a reorder thing and a writebuffer thing when they're the same thing, so ReorderBuffer is a lot more clear IMHO. (BTW if you use vim-go, check out go-rename; well, check out go rename either way)
👍
mdata/ifaces.go
Outdated
GetAggregated(consolidator consolidation.Consolidator, aggSpan, from, to uint32) MetricResult
}

type MetricResult struct {
this type is not an interface, so should go into another file, maybe metricresult.go ? actually, since the relation to metrics is already implied due to it being in mdata, I think we can simply call this Result
and put in result.go
also I think this warrants some more documentation; it's not really clear what "Raw" is, for example, and how it relates to Iters. also, if the slice of iters is called Iters, why is the slice of points called Raw and not Points?
It's named "Raw" because there are raw points, while the "Iters" contains compressed points. Naming it "Points" might be confusing because both of them contain points, the difference is just that one of them has them raw and the other has them compressed. But yeah, then it should be called "Raw" and "Compressed":) I can also name it points and iters, that's fine anyway
+1 for points and iters
Ok, naming it Points and Iters.

seems like there's still a bunch of places where the terminology "writebuffer" instead of "reorderbuffer" is used
mdata/aggmetric.go
Outdated
@@ -26,6 +28,7 @@ type AggMetric struct {
	cachePusher cache.CachePusher
	sync.RWMutex
	Key string
	wb *ReorderBuffer
:GoRename rb
if you use vim-go :) or maybe reorderBuf
or something. but not wb.
👍
mdata/aggmetric.go
Outdated
result := MetricResult{
	Oldest: math.MaxInt32,
	Iters:  make([]chunk.Iter, 0),
	Raw:    make([]schema.Point, 0),
don't think we have to allocate here.
right, missed that
mdata/write_buffer.go
Outdated
ts = aggBoundary(ts, wb.interval)

// out of order and too old
if wb.buf[wb.newest].Ts != 0 && ts <= wb.buf[wb.newest].Ts-(wb.len*wb.interval) {
to allow for a window that is not a multiple of interval, this can just compare against the window size instead of doing the multiplication at every add
Assuming that the reorder window size is defined as a time, and not as a number of data points, this would make sense. But actually I still feel like for a user it's easier to understand if the reorder window is defined by the number of data points rather than a time span.
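The trade-off in the snippet above can be shown side by side: a window in datapoints requires the multiplication on every add, while a window precomputed in seconds just compares directly. Both helpers are illustrative, not the PR's code:

```go
package main

import "fmt"

// tooOldByPoints: window expressed as a number of datapoints,
// multiplied by the interval on every check.
func tooOldByPoints(ts, newest, windowPoints, interval uint32) bool {
	return newest != 0 && ts <= newest-windowPoints*interval
}

// tooOldBySeconds: window precomputed as a duration in seconds.
func tooOldBySeconds(ts, newest, windowSeconds uint32) bool {
	return newest != 0 && ts <= newest-windowSeconds
}

func main() {
	// 5 points at a 10s interval is equivalent to a 50s window
	fmt.Println(tooOldByPoints(40, 100, 5, 10)) // true
	fmt.Println(tooOldBySeconds(40, 100, 50))   // true
	fmt.Println(tooOldBySeconds(60, 100, 50))   // false
}
```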
mdata/write_buffer.go
Outdated
newest uint32 // index of newest buffer entry | ||
interval uint32 // metric interval | ||
buf []schema.Point // the actual buffer holding the data | ||
flush func(uint32, float64) // flushCount callback |
i'm not sure exactly what the cost is of having a value of type func (remember, multiplied for every single metric that has a reorderbuffer), but it's gotta be at least a few bytes, and probably internally a pointer too, causing some GC overhead.
my suggestion is to remove this callback; instead, Add can return a slice of points, so that the caller (AggMetric) can just add all the points that it gets back from the RB. This way Add becomes more of a "Process": a point goes in, and 0 or more come out. I think this should be a bit more efficient, and at least it makes the API cleaner IMHO.
I think this is a good idea
FWIW just had a chat on #performance on gophers slack
(https://gophers.slack.com/archives/C0VP8EF3R/p1499947031935022)
and function variables are internally uintptr's so they don't cause GC overhead, however they will take 8B of space. so if we have 10M metrics with each a RB then that's 80MB of pointers all pointing to the same memory location, so yeah the Process()
idiom seems nicer to me too.
Cool, changing it
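A minimal sketch of the Process idiom agreed on here: Add keeps points ordered inside the window and returns whatever falls out of it, instead of invoking a stored flush callback. The names, and the use of a capacity-bounded sorted slice instead of the PR's circular buffer, are illustrative only:

```go
package main

import (
	"fmt"
	"sort"
)

// rb is a toy reorder buffer; timestamps only, for brevity.
type rb struct {
	window int      // max points kept in the buffer
	ts     []uint32
}

// Add inserts ts in timestamp order and returns the points that no
// longer fit in the window, oldest first.
func (b *rb) Add(ts uint32) []uint32 {
	// insert in sorted position
	i := sort.Search(len(b.ts), func(i int) bool { return b.ts[i] >= ts })
	b.ts = append(b.ts, 0)
	copy(b.ts[i+1:], b.ts[i:])
	b.ts[i] = ts

	// everything beyond the window comes back out
	var out []uint32
	for len(b.ts) > b.window {
		out = append(out, b.ts[0])
		b.ts = b.ts[1:]
	}
	return out
}

func main() {
	b := &rb{window: 3}
	b.Add(10)
	b.Add(30)
	b.Add(20)              // out of order, reordered in place
	fmt.Println(b.Add(40)) // [10]
}
```

The caller (AggMetric in the PR) would then persist whatever Add returns, keeping the buffer itself free of callbacks and locks.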
mdata/write_buffer.go
Outdated
}

for i := uint32(0); i < flushCount; i++ {
	if wb.buf[oldest].Ts != 0 {
why not track oldest
as a member attribute of rb? then you don't have to loop and check for zeroes all the time.
We still need to check for zeroes because there might be gaps in the data
ah of course.
mdata/aggmetric_test.go
Outdated
c.Add(135, 135)
c.Verify(true, 100, 199, 101, 135)

// add new ranges, aligned and unaligned
what does "aligned" mean here?
need to rewrite those comments, they are old
Force-pushed from e75f6c4 to 95fa1c2
Force-pushed from 840ecc8 to d078406
docs/config.md
Outdated
@@ -415,6 +415,7 @@ retentions = 1s:35d:10min:7
# * aggregationMethod specifies the functions used to aggregate values for the next retention level. Legal methods are avg/average, sum, min, max, and last. The default is average.
# Unlike Graphite, you can specify multiple, as it is often handy to have different summaries available depending on what analysis you need to do.
# When using multiple, the first one is used for reading. In the future, we will add capabilities to select the different archives for reading.
# * reorderBuffer an optional buffer that temporarily keeps data points in memory as raw data and allows insertion at random order. The specified value is how many datapoints, at a defined metric interval, should be kept before they are flushed out. This is useful if the metric producers cannot guarantee that the data will arrive in order, but it is relatively memory intensive.
"at a defined interval" is vague. should probably be "based on the raw interval specified in the first defined retention", and actually since it's related to interval it should go in storage-schemas.conf not storage-aggregation
👍
mdata/reorder_buffer.go
Outdated
 * which is thread safe, so there is no locking in the buffer.
 */

type ReorderBuffer struct {
The convention is simple: to document a type, variable, constant, function, or even a package, write a regular comment directly preceding its declaration, with no intervening blank line. Godoc will then present that comment as text alongside the item it documents. For example, this is the documentation for the fmt package's Fprint function:
👍
this doesn't parse right.
see for yourself by running this in the MT dir:
go get golang.org/x/tools/cmd/godoc
godoc -http=":6060"
and go to http://localhost:6060/pkg/github.com/raintank/metrictank/mdata/#ReorderBuffer
it should be:
// ReorderBuffer keeps a window of data during which it is ok to send data out of order.
// Once the reorder window has passed Add() will return the old data and delete it from the buffer.
// The reorder buffer itself is not thread safe because it is only used by AggMetric,
// which is thread safe, so there is no locking in the buffer.
type ReorderBuffer struct {
mdata/aggmetric_test.go
Outdated
c.t.Fatalf("Points: Values()=(%v,%v), want end of stream\n", point.Ts, point.Val)
}
if c.points[index].ts != point.Ts || c.points[index].val != point.Val {
	fmt.Println(res.Points)
stray println
👍
mdata/reorder_buffer_test.go
Outdated
addRes := b.Add(point.Ts, point.Val)
flushed = append(flushed, addRes...)
}
if expectAddFail && metricsTooOld.Peek() == 0 {
why don't we just make expectAddFail an int? that way you can just check expectedFails != metricsTooOld.Peek(), which is simpler + allows for tests that want multiple failures
👍
mdata/reorder_buffer_test.go
Outdated
returned := b.Get()

if !reflect.DeepEqual(expectedData, returned) {
	t.Fatal(fmt.Sprintf("Returned data does not match expected data\n%+v\n %+v", testData, expectedData))
you can just use t.Fatalf
That seems to make the tests fail https://circleci.com/gh/raintank/metrictank/2413
Force-pushed from b6fd320 to 7afb50f
Force-pushed from 7afb50f to 03bae1e
@@ -116,6 +117,19 @@ func ReadSchemas(file string) (Schemas, error) {
	}
	schema.Priority = int64(p)<<32 - int64(i) // to sort records with same priority by position in file

	reorderBufferStr := sec.ValueOf("reorderBuffer")
	if len(reorderBufferStr) > 0 {
		reorderWindow, err := strconv.ParseUint(reorderBufferStr, 10, 32)
in the code we talk about "reorderWindow" everywhere for this parameter, but the setting in the config file is reorderBuffer. why? I think it should be reorderWindow also, especially since later more config options could follow for the reorderBuffer. "reorderBuffer" is the thing doing the reordering, not the setting for the window size
also i see docker/docker-dev-custom-cfg-kafka/storage-aggregation.conf does not have the comment that scripts/config/storage-schemas.conf has
If we'd add options to the reorderBuffer, wouldn't they most likely be appended to the current one, in a similar style to the schema string?
E.g. previously, when the reorder buffer still had the flushMin, the setting was like reorderBuffer=<reorder window>:<flush min>. I'm fine changing it to reorderWindow if you prefer, but that means that if we later add more configs to the reorder buffer, it would probably either have to be renamed or we'd need to add them as additional separate parameters.
ah good point. ok let's leave it then
updating the comment in docker/docker-dev-custom-cfg-kafka/storage-aggregation.conf
actually it's the same for the two storage-schemas.conf
in:
docker/docker-dev-custom-cfg-kafka/storage-schemas.conf
docker/docker-cluster/storage-schemas.conf
Should i copy the comment from scripts/config/storage-schemas.conf
to all of them while i'm at it?
yeah. scripts/sync-configs.sh helps with keeping all ini's in sync across the repo, but doing it for the *.conf ones is still manual :/
pushed an update
The purpose of this reorder buffer is to provide the ability to reorder data that arrived out of order during a limited time window.
closes #663