Remove disk series read limit #3174

Merged on Feb 5, 2021 (6 commits)
28 changes: 1 addition & 27 deletions site/content/operational_guide/resource_limits.md
@@ -53,15 +53,6 @@ per second safely with your deployment and you want to use the default lookback
of `15s` then you would multiply 10,000 by 15 to get 150,000 as a max value with
a 15s lookback.
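As a quick illustration of that sizing arithmetic (a runnable sketch using the example's numbers, not part of this diff):

```go
// Derive a lookback-limit value from a sustainable query rate.
// Assumes the deployment can safely serve 10,000 series reads per second
// and uses the default 15s lookback window, as in the example above.
package main

import "fmt"

func main() {
	const sustainableSeriesPerSecond = 10000
	const lookbackSeconds = 15
	// The limit value is simply the sustainable rate multiplied by the lookback.
	maxValue := sustainableSeriesPerSecond * lookbackSeconds
	fmt.Println(maxValue) // prints 150000
}
```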

The third limit, `maxRecentlyQueriedSeriesDiskRead`, caps the number of series IDs matched by
incoming queries. It was originally distinct from `maxRecentlyQueriedSeriesBlocks`, which also
limits the memory cost of the specific series matched, because allocations used to occur even
for series known not to be present on disk for a given shard. That inefficiency has since been
resolved (https://github.com/m3db/m3/pull/3103), so this limit now tracks memory cost roughly
linearly with `maxRecentlyQueriedSeriesBlocks`. Prefer `maxRecentlyQueriedSeriesBlocks` over
`maxRecentlyQueriedSeriesDiskRead`, since both cap resources similarly.
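Both limits share the mechanism this paragraph describes: a global count accumulated over a lookback window. A minimal self-contained sketch of that pattern (the names `lookbackLimit` and `Inc` are illustrative, not M3's actual `limits` API):

```go
// Sketch of a lookback limit: a shared counter is incremented as queries
// match series, reset every lookback period, and queries fail fast once
// the window's budget is exhausted.
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

type lookbackLimit struct {
	limit   int64
	current int64
}

func newLookbackLimit(limit int64, lookback time.Duration) *lookbackLimit {
	l := &lookbackLimit{limit: limit}
	go func() {
		for range time.Tick(lookback) {
			// Reset the running count at the start of each lookback window.
			atomic.StoreInt64(&l.current, 0)
		}
	}()
	return l
}

// Inc adds n to the window's running count, rejecting once over the limit.
func (l *lookbackLimit) Inc(n int64) error {
	if atomic.AddInt64(&l.current, n) > l.limit {
		return errors.New("query aborted due to limit")
	}
	return nil
}

func main() {
	l := newLookbackLimit(150000, 15*time.Second)
	fmt.Println(l.Inc(150000)) // <nil>: exactly at this window's budget
	fmt.Println(l.Inc(1))      // error: budget exhausted until the window resets
}
```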

### Annotated configuration

```yaml
@@ -91,18 +82,6 @@ limits:
# and read until the lookback period resets.
lookback: 15s

# If set, will enforce a maximum on the series read from disk.
# This limit can be used to ensure queries that match an extremely high
# volume of series can be limited before even reading the underlying series data from disk.
maxRecentlyQueriedSeriesDiskRead:
# Value sets the maximum number of series read from disk.
value: 0
# Lookback sets the time window that this limit is enforced over, every
# lookback period the global count is reset to zero and when the limit
# is reached it will reject any further time series blocks being matched
# and read until the lookback period resets.
lookback: 15s

# If set then will limit the number of parallel write batch requests to the
# database and return errors if hit.
maxOutstandingWriteRequests: 0
@@ -133,11 +112,6 @@ curl -vvvsSf -X POST 0.0.0.0:7201/api/v1/kvstore -d '{
"lookbackSeconds":15,
"forceExceeded":false
},
"maxRecentlyQueriedSeriesDiskRead": {
"limit":0,
"lookbackSeconds":15,
"forceExceeded":false
}
},
"commit":true
}'
@@ -216,4 +190,4 @@ limits:
The following headers can also be used to override configured limits on a per-request basis (allowing different limits for different callers):


{{% fileinclude file="headers_optional_read_limits.md" %}}
{{% fileinclude file="headers_optional_read_limits.md" %}}
102 changes: 23 additions & 79 deletions src/cluster/generated/proto/kvpb/kv.pb.go

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion src/cluster/generated/proto/kvpb/kv.proto
@@ -36,7 +36,7 @@ message KeyValueUpdateResult {
message QueryLimits {
QueryLimit maxRecentlyQueriedSeriesBlocks = 1;
QueryLimit maxRecentlyQueriedSeriesDiskBytesRead = 2;
QueryLimit maxRecentlyQueriedSeriesDiskRead = 3;
reserved 3;
}

message QueryLimit {
1 change: 0 additions & 1 deletion src/cmd/services/m3dbnode/config/config_test.go
@@ -730,7 +730,6 @@ func TestConfiguration(t *testing.T) {
meta_event_reporting_enabled: false
limits:
maxRecentlyQueriedSeriesDiskBytesRead: null
maxRecentlyQueriedSeriesDiskRead: null
maxRecentlyQueriedSeriesBlocks: null
maxOutstandingWriteRequests: 0
maxOutstandingReadRequests: 0
6 changes: 0 additions & 6 deletions src/cmd/services/m3dbnode/config/limits.go
@@ -29,12 +29,6 @@ type LimitsConfiguration struct {
// max is surpassed encounter an error.
MaxRecentlyQueriedSeriesDiskBytesRead *MaxRecentQueryResourceLimitConfiguration `yaml:"maxRecentlyQueriedSeriesDiskBytesRead"`

// MaxRecentlyQueriedSeriesDiskRead sets the upper limit on time series read from disk within a given lookback
// period. Queries which are issued while this max is surpassed encounter an error.
// This is the number of time series, which is different from the number of bytes controlled by
// MaxRecentlyQueriedSeriesDiskBytesRead.
MaxRecentlyQueriedSeriesDiskRead *MaxRecentQueryResourceLimitConfiguration `yaml:"maxRecentlyQueriedSeriesDiskRead"`

// MaxRecentlyQueriedSeriesBlocks sets the upper limit on time series blocks
// count within a given lookback period. Queries which are issued while this
// max is surpassed encounter an error.
10 changes: 0 additions & 10 deletions src/dbnode/network/server/tchannelthrift/node/service.go
@@ -811,7 +811,6 @@ func (s *service) FetchTaggedIter(ctx context.Context, req *rpc.FetchTaggedReque
nsID: ns,
tagEncoder: tagEncoder,
iOpts: s.opts.InstrumentOptions(),
blockReadLimit: s.queryLimits.DiskSeriesReadLimit(),
}), nil
}

@@ -856,7 +855,6 @@ type fetchTaggedResultsIter struct {
docReader *docs.EncodedDocumentReader
tagEncoder serialize.TagEncoder
iOpts instrument.Options
blocksReadLimit limits.LookbackLimit
}

type fetchTaggedResultsIterOpts struct {
@@ -868,7 +866,6 @@ type fetchTaggedResultsIterOpts struct {
nsID ident.ID
tagEncoder serialize.TagEncoder
iOpts instrument.Options
blockReadLimit limits.LookbackLimit
}

func newFetchTaggedResultsIter(opts fetchTaggedResultsIterOpts) FetchTaggedResultsIter { //nolint: gocritic
@@ -884,7 +881,6 @@ func newFetchTaggedResultsIter(opts fetchTaggedResultsIterOpts) FetchTaggedResul
docReader: opts.docReader,
tagEncoder: opts.tagEncoder,
iOpts: opts.iOpts,
blocksReadLimit: opts.blockReadLimit,
}

return iter
@@ -911,7 +907,6 @@ func (i *fetchTaggedResultsIter) Next(ctx context.Context) bool {
docReader: i.docReader,
tagEncoder: i.tagEncoder,
iOpts: i.iOpts,
blocksReadLimit: i.blocksReadLimit,
}
if i.fetchData {
// NB(r): Use a bytes ID here so that this ID doesn't need to be
@@ -941,10 +936,6 @@ func (i *fetchTaggedResultsIter) Next(ctx context.Context) bool {
for blockIter.Next(ctx) {
curr := blockIter.Current()
blockReaders = append(blockReaders, curr)
if err := i.blocksReadLimit.Inc(len(blockReaders), nil); err != nil {
i.err = err
return false
}
}
if blockIter.Err() != nil {
i.err = blockIter.Err()
@@ -988,7 +979,6 @@ type idResult struct {
tagEncoder serialize.TagEncoder
blockReadersIter series.BlockReaderIter
blockReaders [][]xio.BlockReader
blocksReadLimit limits.LookbackLimit
iOpts instrument.Options
}

11 changes: 1 addition & 10 deletions src/dbnode/network/server/tchannelthrift/node/service_test.go
@@ -1558,11 +1558,6 @@ func TestServiceFetchTagged(t *testing.T) {
{
name: "happy path",
},
{
name: "block read limit",
blocksReadLimit: 1,
fetchErrMsg: "query aborted due to limit",
},
}

for _, tc := range testCases {
@@ -1577,11 +1572,7 @@ func TestServiceFetchTagged(t *testing.T) {
queryLimits, err := limits.NewQueryLimits(limits.NewOptions().
SetInstrumentOptions(testTChannelThriftOptions.InstrumentOptions()).
SetBytesReadLimitOpts(limits.DefaultLookbackLimitOptions()).
SetDocsLimitOpts(limits.DefaultLookbackLimitOptions()).
SetDiskSeriesReadLimitOpts(limits.LookbackLimitOptions{
Limit: tc.blocksReadLimit,
Lookback: time.Second * 1,
}))
SetDocsLimitOpts(limits.DefaultLookbackLimitOptions()))
require.NoError(t, err)
testTChannelThriftOptions = testTChannelThriftOptions.SetQueryLimits(queryLimits)

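After this change, query limits are constructed without the removed `SetDiskSeriesReadLimitOpts` call, as the test diff above shows. A short usage sketch of the post-change construction (import paths assumed from the m3 repo layout; treat them as illustrative):

```go
// Build QueryLimits with only the remaining limits: docs (series blocks)
// and bytes read from disk. Mirrors the updated test setup above.
package main

import (
	"fmt"

	"github.com/m3db/m3/src/dbnode/storage/limits"
	"github.com/m3db/m3/src/x/instrument"
)

func main() {
	opts := limits.NewOptions().
		SetInstrumentOptions(instrument.NewOptions()).
		SetBytesReadLimitOpts(limits.DefaultLookbackLimitOptions()).
		SetDocsLimitOpts(limits.DefaultLookbackLimitOptions())
	queryLimits, err := limits.NewQueryLimits(opts)
	if err != nil {
		panic(err)
	}
	fmt.Println(queryLimits != nil) // true: limits registry ready for use
}
```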
14 changes: 0 additions & 14 deletions src/dbnode/server/server.go
@@ -451,7 +451,6 @@ func Run(runOpts RunOptions) {
// Setup query stats tracking.
docsLimit := limits.DefaultLookbackLimitOptions()
bytesReadLimit := limits.DefaultLookbackLimitOptions()
diskSeriesReadLimit := limits.DefaultLookbackLimitOptions()
if limitConfig := runOpts.Config.Limits.MaxRecentlyQueriedSeriesBlocks; limitConfig != nil {
docsLimit.Limit = limitConfig.Value
docsLimit.Lookback = limitConfig.Lookback
@@ -460,14 +459,9 @@ func Run(runOpts RunOptions) {
bytesReadLimit.Limit = limitConfig.Value
bytesReadLimit.Lookback = limitConfig.Lookback
}
if limitConfig := runOpts.Config.Limits.MaxRecentlyQueriedSeriesDiskRead; limitConfig != nil {
diskSeriesReadLimit.Limit = limitConfig.Value
diskSeriesReadLimit.Lookback = limitConfig.Lookback
}
limitOpts := limits.NewOptions().
SetDocsLimitOpts(docsLimit).
SetBytesReadLimitOpts(bytesReadLimit).
SetDiskSeriesReadLimitOpts(diskSeriesReadLimit).
SetInstrumentOptions(iOpts)
if builder := opts.SourceLoggerBuilder(); builder != nil {
limitOpts = limitOpts.SetSourceLoggerBuilder(builder)
@@ -1214,16 +1208,12 @@ func updateQueryLimits(logger *zap.Logger,
// Default to the config-based limits if unset in dynamic limits.
// Otherwise, use the dynamic limit.
docsLimitOpts = configOpts.DocsLimitOpts()
diskSeriesReadLimitOpts = configOpts.DiskSeriesReadLimitOpts()
bytesReadLimitOpts = configOpts.BytesReadLimitOpts()
)
if dynamicOpts != nil {
if dynamicOpts.MaxRecentlyQueriedSeriesBlocks != nil {
docsLimitOpts = dynamicLimitToLimitOpts(dynamicOpts.MaxRecentlyQueriedSeriesBlocks)
}
if dynamicOpts.MaxRecentlyQueriedSeriesDiskRead != nil {
diskSeriesReadLimitOpts = dynamicLimitToLimitOpts(dynamicOpts.MaxRecentlyQueriedSeriesDiskRead)
}
if dynamicOpts.MaxRecentlyQueriedSeriesDiskBytesRead != nil {
bytesReadLimitOpts = dynamicLimitToLimitOpts(dynamicOpts.MaxRecentlyQueriedSeriesDiskBytesRead)
}
@@ -1233,10 +1223,6 @@ func updateQueryLimits(logger *zap.Logger,
logger.Error("error updating docs limit", zap.Error(err))
}

if err := updateQueryLimit(queryLimits.DiskSeriesReadLimit(), diskSeriesReadLimitOpts); err != nil {
logger.Error("error updating series read limit", zap.Error(err))
}

if err := updateQueryLimit(queryLimits.BytesReadLimit(), bytesReadLimitOpts); err != nil {
logger.Error("error updating bytes read limit", zap.Error(err))
}
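The `updateQueryLimits` hunk above preserves the existing precedence for the remaining limits: a dynamic KV-backed setting overrides config when present, and the config-derived options apply otherwise. A simplified, self-contained sketch of that precedence (stand-in types, not M3's actual ones):

```go
// Resolve effective limit options: start from config-based options and
// override with the dynamic (KV-store) value only when one is set.
package main

import (
	"fmt"
	"time"
)

type limitOpts struct {
	Limit    int64
	Lookback time.Duration
}

// dynamicLimit mimics the shape of the KV-backed QueryLimit message.
type dynamicLimit struct {
	Limit           int64
	LookbackSeconds int64
}

// resolveLimit mirrors the fallback in updateQueryLimits: dynamic wins
// when present, otherwise the config-derived options are kept as-is.
func resolveLimit(config limitOpts, dynamic *dynamicLimit) limitOpts {
	if dynamic == nil {
		return config
	}
	return limitOpts{
		Limit:    dynamic.Limit,
		Lookback: time.Duration(dynamic.LookbackSeconds) * time.Second,
	}
}

func main() {
	config := limitOpts{Limit: 150000, Lookback: 15 * time.Second}
	fmt.Println(resolveLimit(config, nil)) // config applies: no dynamic override
	fmt.Println(resolveLimit(config, &dynamicLimit{Limit: 200000, LookbackSeconds: 30})) // dynamic wins
}
```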