kvserver: make some cluster settings system only #98353

Merged · 1 commit · Mar 21, 2023
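Every code change in this PR follows the same one-line pattern: a cluster setting's class is flipped from settings.TenantWritable to settings.SystemOnly, which removes it from the set of settings that secondary tenants can view or override; the deletions from docs/generated/settings/settings-for-tenants.txt below are the generated-docs side of that change. A minimal sketch of the pattern, using the same settings API shown in the diffs (the setting name, description, and default here are hypothetical):

package kvserverexample

import (
    "time"

    "github.com/cockroachdb/cockroach/pkg/settings"
)

// exampleTimeout illustrates the class change applied throughout this PR.
var exampleTimeout = settings.RegisterDurationSetting(
    settings.SystemOnly, // was settings.TenantWritable before this change
    "kv.example.operation_timeout",
    "an illustrative duration setting; only the system tenant can read or set it",
    5*time.Second,
)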
11 changes: 0 additions & 11 deletions docs/generated/settings/settings-for-tenants.txt
@@ -36,19 +36,9 @@ feature.restore.enabled boolean true set to true to enable restore, false to disable; default is true
feature.schema_change.enabled boolean true set to true to enable schema changes, false to disable; default is true
feature.stats.enabled boolean true set to true to enable CREATE STATISTICS/ANALYZE, false to disable; default is true
jobs.retention_time duration 336h0m0s the amount of time for which records for completed jobs are retained
kv.bulk_io_write.max_rate byte size 1.0 TiB the rate limit (bytes/sec) to use for writes to disk on behalf of bulk io ops
kv.bulk_sst.max_allowed_overage byte size 64 MiB if positive, allowed size in excess of target size for SSTs from export requests; export requests (i.e. BACKUP) may buffer up to the sum of kv.bulk_sst.target_size and kv.bulk_sst.max_allowed_overage in memory
kv.bulk_sst.target_size byte size 16 MiB target size for SSTs emitted from export requests; export requests (i.e. BACKUP) may buffer up to the sum of kv.bulk_sst.target_size and kv.bulk_sst.max_allowed_overage in memory
kv.closed_timestamp.follower_reads_enabled boolean true allow (all) replicas to serve consistent historical reads based on closed timestamp information
kv.log_range_and_node_events.enabled boolean true set to true to transactionally log range events (e.g., split, merge, add/remove voter/non-voter) into system.rangelog and node join and restart events into system.eventlog
kv.protectedts.reconciliation.interval duration 5m0s the frequency for reconciling jobs with protected timestamp records
kv.range_split.by_load_enabled boolean true allow automatic splits of ranges based on where load is concentrated
kv.range_split.load_cpu_threshold duration 500ms the CPU use per second over which, the range becomes a candidate for load based splitting
kv.range_split.load_qps_threshold integer 2500 the QPS over which, the range becomes a candidate for load based splitting
kv.rangefeed.enabled boolean false if set, rangefeed registration is enabled
kv.rangefeed.range_stuck_threshold duration 1m0s restart rangefeeds if they don't emit anything for the specified threshold; 0 disables (kv.closed_timestamp.side_transport_interval takes precedence)
kv.replica_stats.addsst_request_size_factor integer 50000 the divisor that is applied to addsstable request sizes, then recorded in a leaseholders QPS; 0 means all requests are treated as cost 1
kv.replication_reports.interval duration 1m0s the frequency for generating the replication_constraint_stats, replication_stats_report and replication_critical_localities reports (set to 0 to disable)
kv.transaction.max_intents_bytes integer 4194304 maximum number of bytes used to track locks in transactions
kv.transaction.max_refresh_spans_bytes integer 4194304 maximum number of bytes used to track refresh spans in serializable transactions
kv.transaction.reject_over_max_intents_budget.enabled boolean false if set, transactions that exceed their lock tracking budget (kv.transaction.max_intents_bytes) are rejected instead of having their lock spans imprecisely compressed
@@ -82,7 +72,6 @@ server.oidc_authentication.scopes string openid sets OIDC scopes to include with
server.rangelog.ttl duration 720h0m0s if nonzero, entries in system.rangelog older than this duration are periodically purged
server.shutdown.connection_wait duration 0s the maximum amount of time a server waits for all SQL connections to be closed before proceeding with a drain. (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.shutdown.drain_wait duration 0s the amount of time a server waits in an unready state before proceeding with a drain (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting. --drain-wait is to specify the duration of the whole draining process, while server.shutdown.drain_wait is to set the wait time for health probes to notice that the node is not ready.)
server.shutdown.lease_transfer_wait duration 5s the timeout for a single iteration of the range lease transfer phase of draining (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.shutdown.query_wait duration 10s the timeout for waiting for active queries to finish during a drain (note that the --drain-wait parameter for cockroach node drain may need adjustment after changing this setting)
server.time_until_store_dead duration 5m0s the time after which if there is no new gossiped information about a store, it is considered dead
server.user_login.cert_password_method.auto_scram_promotion.enabled boolean true whether to automatically promote cert-password authentication to use SCRAM
4 changes: 2 additions & 2 deletions pkg/kv/kvserver/allocator/storepool/store_pool.go
@@ -47,7 +47,7 @@ const (
// replicate queue will not consider stores which have failed a reservation a
// viable target.
var FailedReservationsTimeout = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"server.failed_reservation_timeout",
"the amount of time to consider the store throttled for up-replication after a failed reservation call",
5*time.Second,
@@ -59,7 +59,7 @@ const timeAfterStoreSuspectSettingName = "server.time_after_store_suspect"
// TimeAfterStoreSuspect measures how long we consider a store suspect since
// its last failure.
var TimeAfterStoreSuspect = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
timeAfterStoreSuspectSettingName,
"the amount of time we consider a store suspect for after it fails a node liveness heartbeat."+
" A suspect node would not receive any new replicas or lease transfers, but will keep the replicas it has.",
4 changes: 2 additions & 2 deletions pkg/kv/kvserver/batcheval/cmd_export.go
@@ -37,7 +37,7 @@ const SSTTargetSizeSetting = "kv.bulk_sst.target_size"
// ExportRequestTargetFileSize controls the target file size for SSTs created
// during backups.
var ExportRequestTargetFileSize = settings.RegisterByteSizeSetting(
settings.TenantWritable,
settings.SystemOnly,
SSTTargetSizeSetting,
fmt.Sprintf("target size for SSTs emitted from export requests; "+
"export requests (i.e. BACKUP) may buffer up to the sum of %s and %s in memory",
@@ -55,7 +55,7 @@ const MaxExportOverageSetting = "kv.bulk_sst.max_allowed_overage"
// and an SST would exceed this size (due to large rows or large numbers of
// versions), then the export will fail.
var ExportRequestMaxAllowedFileSizeOverage = settings.RegisterByteSizeSetting(
settings.TenantWritable,
settings.SystemOnly,
MaxExportOverageSetting,
fmt.Sprintf("if positive, allowed size in excess of target size for SSTs from export requests; "+
"export requests (i.e. BACKUP) may buffer up to the sum of %s and %s in memory",
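The two export settings changed above work as a pair: an export request may buffer up to kv.bulk_sst.target_size plus kv.bulk_sst.max_allowed_overage in memory, and an SST that cannot be kept under that budget (for example, a large row or many versions of one key) fails the request. A rough sketch of that check, with hypothetical names rather than the actual batcheval code:

package exportexample

import "fmt"

// checkSSTBudget is an illustrative stand-in for the overage check described
// above: if a single SST cannot stay within targetSize+maxOverage bytes, the
// export fails rather than buffering without bound.
func checkSSTBudget(sstBytes, targetSize, maxOverage int64) error {
    if maxOverage > 0 && sstBytes > targetSize+maxOverage {
        return fmt.Errorf("SST size %d exceeds budget %d", sstBytes, targetSize+maxOverage)
    }
    return nil
}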
@@ -30,7 +30,7 @@ import (
// QueryResolvedTimestampIntentCleanupAge configures the minimum intent age that
// QueryResolvedTimestamp requests will consider for async intent cleanup.
var QueryResolvedTimestampIntentCleanupAge = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.query_resolved_timestamp.intent_cleanup_age",
"minimum intent age that QueryResolvedTimestamp requests will consider for async intent cleanup",
10*time.Second,
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/closedts/setting.go
@@ -40,7 +40,7 @@ var SideTransportCloseInterval = settings.RegisterDurationSetting(
// (see TargetForPolicy), if it is set to a non-zero value. Meant as an escape
// hatch.
var LeadForGlobalReadsOverride = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.closed_timestamp.lead_for_global_reads_override",
"if nonzero, overrides the lead time that global_read ranges use to publish closed timestamps",
0,
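Per the comment above, kv.closed_timestamp.lead_for_global_reads_override is an escape hatch that takes effect only when nonzero. A sketch of that selection logic, under the assumption that the value computed by TargetForPolicy otherwise wins (names hypothetical):

package closedtsexample

import "time"

// leadTimeForGlobalReads returns the override when it is set, and the
// computed lead time otherwise. Not the actual closedts code.
func leadTimeForGlobalReads(computed, override time.Duration) time.Duration {
    if override != 0 {
        return override
    }
    return computed
}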
4 changes: 2 additions & 2 deletions pkg/kv/kvserver/concurrency/concurrency_manager.go
@@ -60,7 +60,7 @@ import (
// utilization and runaway queuing for misbehaving clients, a role it is well
// positioned to serve.
var MaxLockWaitQueueLength = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.lock_table.maximum_lock_wait_queue_length",
"the maximum length of a lock wait-queue that read-write requests are willing "+
"to enter and wait in. The setting can be used to ensure some level of quality-of-service "+
@@ -93,7 +93,7 @@ var MaxLockWaitQueueLength = settings.RegisterIntSetting(
// discoveredCount > 100,000, caused by stats collection, where we definitely
// want to avoid adding these locks to the lock table, if possible.
var DiscoveredLocksThresholdToConsultFinalizedTxnCache = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.lock_table.discovered_locks_threshold_for_consulting_finalized_txn_cache",
"the maximum number of discovered locks by a waiter, above which the finalized txn cache"+
"is consulted and resolvable locks are not added to the lock table -- this should be a small"+
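The comment above describes MaxLockWaitQueueLength as a quality-of-service bound: rather than queue behind an already-long lock wait-queue, a read-write request can be rejected up front. A sketch of the admission check it implies; treating zero as "no limit" is an assumption based on the setting's wording:

package concurrencyexample

// shouldJoinWaitQueue reports whether a read-write request is willing to
// enter a lock wait-queue of the given current length.
func shouldJoinWaitQueue(queueLen int, maxQueueLen int64) bool {
    if maxQueueLen == 0 {
        return true // assumed: zero disables the limit
    }
    return int64(queueLen) < maxQueueLen
}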
4 changes: 2 additions & 2 deletions pkg/kv/kvserver/concurrency/lock_table_waiter.go
@@ -39,7 +39,7 @@ import (
// LockTableLivenessPushDelay sets the delay before pushing in order to detect
// coordinator failures of conflicting transactions.
var LockTableLivenessPushDelay = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.lock_table.coordinator_liveness_push_delay",
"the delay before pushing in order to detect coordinator failures of conflicting transactions",
// This is set to a short duration to ensure that we quickly detect failed
@@ -71,7 +71,7 @@ var LockTableLivenessPushDelay = settings.RegisterDurationSetting(
// LockTableDeadlockDetectionPushDelay sets the delay before pushing in order to
// detect dependency cycles between transactions.
var LockTableDeadlockDetectionPushDelay = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.lock_table.deadlock_detection_push_delay",
"the delay before pushing in order to detect dependency cycles between transactions",
// This is set to a medium duration to ensure that deadlock caused by
6 changes: 3 additions & 3 deletions pkg/kv/kvserver/gc/gc.go
@@ -67,7 +67,7 @@ const (
// IntentAgeThreshold is the threshold after which an extant intent
// will be resolved.
var IntentAgeThreshold = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.gc.intent_age_threshold",
"intents older than this threshold will be resolved when encountered by the MVCC GC queue",
2*time.Hour,
@@ -106,7 +106,7 @@ var TxnCleanupThreshold = settings.RegisterDurationSetting(
// of writing. This value is subject to tuning in real environment as we have
// more data available.
var MaxIntentsPerCleanupBatch = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.gc.intent_cleanup_batch_size",
"if non zero, gc will split found intents into batches of this size when trying to resolve them",
5000,
@@ -125,7 +125,7 @@ var MaxIntentsPerCleanupBatch = settings.RegisterIntSetting(
// The default value is a conservative limit to prevent pending intent key sizes
// from ballooning.
var MaxIntentKeyBytesPerCleanupBatch = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.gc.intent_cleanup_batch_byte_size",
"if non zero, gc will split found intents into batches of this size when trying to resolve them",
1e6,
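Both GC settings above bound how many found intents are resolved per batch: kv.gc.intent_cleanup_batch_size caps the count and kv.gc.intent_cleanup_batch_byte_size caps the cumulative key bytes, with zero disabling the respective limit. A simplified sketch of that batching discipline (not the actual gc package code):

package gcexample

// batchIntentKeys splits intent keys into batches bounded by count and by
// cumulative key bytes; a zero bound disables that limit.
func batchIntentKeys(keys [][]byte, maxCount, maxBytes int64) [][][]byte {
    var batches [][][]byte
    var cur [][]byte
    var curBytes int64
    for _, k := range keys {
        full := (maxCount > 0 && int64(len(cur)) >= maxCount) ||
            (maxBytes > 0 && curBytes+int64(len(k)) > maxBytes)
        if full && len(cur) > 0 {
            batches = append(batches, cur)
            cur, curBytes = nil, 0
        }
        cur = append(cur, k)
        curBytes += int64(len(k))
    }
    if len(cur) > 0 {
        batches = append(batches, cur)
    }
    return batches
}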
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/kvserverbase/base.go
@@ -231,7 +231,7 @@ func IntersectSpan(

// SplitByLoadMergeDelay wraps "kv.range_split.by_load_merge_delay".
var SplitByLoadMergeDelay = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.range_split.by_load_merge_delay",
"the delay that range splits created due to load will wait before considering being merged away",
5*time.Minute,
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/kvserverbase/syncing_write.go
@@ -65,7 +65,7 @@ func LimitBulkIOWrite(ctx context.Context, limiter *rate.Limiter, cost int) error

// sstWriteSyncRate wraps "kv.bulk_sst.sync_size". 0 disables syncing.
var sstWriteSyncRate = settings.RegisterByteSizeSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.bulk_sst.sync_size",
"threshold after which non-Rocks SST writes must fsync (0 disables)",
BulkIOWriteBurst,
4 changes: 2 additions & 2 deletions pkg/kv/kvserver/logstore/logstore.go
@@ -38,7 +38,7 @@ import (
)

var disableSyncRaftLog = settings.RegisterBoolSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.raft_log.disable_synchronization_unsafe",
"set to true to disable synchronization on Raft log writes to persistent storage. "+
"Setting to true risks data loss or data corruption on server crashes. "+
Expand All @@ -47,7 +47,7 @@ var disableSyncRaftLog = settings.RegisterBoolSetting(
)

var enableNonBlockingRaftLogSync = settings.RegisterBoolSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.raft_log.non_blocking_synchronization.enabled",
"set to true to enable non-blocking synchronization on Raft log writes to "+
"persistent storage. Setting to true does not risk data loss or data corruption "+
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/protectedts/settings.go
@@ -32,7 +32,7 @@ var MaxBytes = settings.RegisterIntSetting(
// MaxSpans controls the maximum number of spans which can be protected
// by all protected timestamp records.
var MaxSpans = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.protectedts.max_spans",
"if non-zero the limit of the number of spans which can be protected",
32768,
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/raft_transport.go
@@ -59,7 +59,7 @@ const (

// targetRaftOutgoingBatchSize wraps "kv.raft.command.target_batch_size".
var targetRaftOutgoingBatchSize = settings.RegisterByteSizeSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.raft.command.target_batch_size",
"size of a batch of raft commands after which it will be sent without further batching",
64<<20, // 64 MB
4 changes: 2 additions & 2 deletions pkg/kv/kvserver/replica_backpressure.go
@@ -28,7 +28,7 @@ var backpressureLogLimiter = log.Every(500 * time.Millisecond)
// range's size must grow to before backpressure will be applied on writes. Set
// to 0 to disable backpressure altogether.
var backpressureRangeSizeMultiplier = settings.RegisterFloatSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.range.backpressure_range_size_multiplier",
"multiple of range_max_bytes that a range is allowed to grow to without "+
"splitting before writes to that range are blocked, or 0 to disable",
@@ -66,7 +66,7 @@ var backpressureRangeSizeMultiplier = settings.RegisterFloatSetting(
// currently backpressuring than ranges which are larger but are not
// applying backpressure.
var backpressureByteTolerance = settings.RegisterByteSizeSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.range.backpressure_byte_tolerance",
"defines the number of bytes above the product of "+
"backpressure_range_size_multiplier and the range_max_size at which "+
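The backpressure settings above define a size threshold: once a range grows past backpressure_range_size_multiplier times its range_max_bytes without splitting, writes to it are blocked, and a multiplier of zero disables the mechanism. A sketch of that check (the backpressureByteTolerance refinement is omitted; names are hypothetical):

package backpressureexample

// shouldBackpressureWrites reports whether writes to a range of the given
// size should be blocked, per the multiplier semantics described above.
func shouldBackpressureWrites(rangeBytes, rangeMaxBytes int64, multiplier float64) bool {
    if multiplier == 0 {
        return false // zero disables backpressure altogether
    }
    return float64(rangeBytes) > multiplier*float64(rangeMaxBytes)
}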
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/replica_follower_read.go
@@ -28,7 +28,7 @@ import (
// information is collected and passed around, regardless of the value of this
// setting.
var FollowerReadsEnabled = settings.RegisterBoolSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.closed_timestamp.follower_reads_enabled",
"allow (all) replicas to serve consistent historical reads based on closed timestamp information",
true,
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/replica_rangefeed.go
@@ -53,7 +53,7 @@ var RangefeedEnabled = settings.RegisterBoolSetting(
// RangeFeedRefreshInterval controls the frequency with which we deliver closed
// timestamp updates to rangefeeds.
var RangeFeedRefreshInterval = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.rangefeed.closed_timestamp_refresh_interval",
"the interval at which closed-timestamp updates"+
"are delivered to rangefeeds; set to 0 to use kv.closed_timestamp.side_transport_interval",
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/replica_send.go
@@ -38,7 +38,7 @@ import (
)

var optimisticEvalLimitedScans = settings.RegisterBoolSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.concurrency.optimistic_eval_limited_scans.enabled",
"when true, limited scans are optimistically evaluated in the sense of not checking for "+
"conflicting latches or locks up front for the full key range of the scan, and instead "+
6 changes: 3 additions & 3 deletions pkg/kv/kvserver/replica_split_load.go
@@ -27,15 +27,15 @@ import (

// SplitByLoadEnabled wraps "kv.range_split.by_load_enabled".
var SplitByLoadEnabled = settings.RegisterBoolSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.range_split.by_load_enabled",
"allow automatic splits of ranges based on where load is concentrated",
true,
).WithPublic()

// SplitByLoadQPSThreshold wraps "kv.range_split.load_qps_threshold".
var SplitByLoadQPSThreshold = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.range_split.load_qps_threshold",
"the QPS over which, the range becomes a candidate for load based splitting",
2500, // 2500 req/s
@@ -53,7 +53,7 @@ var SplitByLoadQPSThreshold = settings.RegisterIntSetting(
// measured as max ops/s for kv and resource balance for allocbench. See #96869
// for more details.
var SplitByLoadCPUThreshold = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.range_split.load_cpu_threshold",
"the CPU use per second over which, the range becomes a candidate for load based splitting",
500*time.Millisecond,
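Load-based splitting is gated by kv.range_split.by_load_enabled, with a range becoming a split candidate once its load exceeds the QPS or CPU threshold. A sketch of the candidacy test; note the simplifying assumption that both thresholds are consulted together, whereas the real allocator works against one load objective at a time:

package splitexample

import "time"

// isLoadSplitCandidate reports whether a range qualifies for load-based
// splitting under the thresholds described above (simplified).
func isLoadSplitCandidate(
    enabled bool,
    qps float64, qpsThreshold int64,
    cpuPerSec, cpuThreshold time.Duration,
) bool {
    if !enabled {
        return false
    }
    return qps > float64(qpsThreshold) || cpuPerSec > cpuThreshold
}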
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/replica_write.go
@@ -43,7 +43,7 @@ import (
// TODO(erikgrinaker): this, and the timeout handling, should be moved into a
// migration helper that manages checkpointing and retries as well.
var migrateApplicationTimeout = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.migration.migrate_application.timeout",
"timeout for a Migrate request to be applied across all replicas of a range",
1*time.Minute,
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/replicastats/replica_stats.go
@@ -38,7 +38,7 @@ const (
// SSTable data, divided by this factor. Thereby, the magnitude of this factor
// is inversely related to QPS sensitivity to AddSSTableRequests.
var AddSSTableRequestSizeFactor = settings.RegisterIntSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.replica_stats.addsst_request_size_factor",
"the divisor that is applied to addsstable request sizes, then recorded in a leaseholders QPS; 0 means all requests are treated as cost 1",
// The default value of 50,000 was chosen as the default divisor, following manual testing that
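Per the description above, an AddSSTable request is recorded in the leaseholder's QPS at a cost of its size divided by kv.replica_stats.addsst_request_size_factor, and a factor of zero prices every request at 1. A sketch of that cost function (exact rounding and minimum-cost behavior in the real accounting may differ):

package statsexample

// addSSTableQPSCost returns the QPS contribution of an AddSSTable request
// under the divisor model described above.
func addSSTableQPSCost(requestBytes, factor int64) float64 {
    if factor <= 0 {
        return 1 // zero factor: every request is treated as cost 1
    }
    return float64(requestBytes) / float64(factor)
}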
2 changes: 1 addition & 1 deletion pkg/kv/kvserver/reports/reporter.go
@@ -46,7 +46,7 @@ import (
// ReporterInterval is the interval between two generations of the reports.
// When set to zero - disables the report generation.
var ReporterInterval = settings.RegisterDurationSetting(
settings.TenantWritable,
settings.SystemOnly,
"kv.replication_reports.interval",
"the frequency for generating the replication_constraint_stats, replication_stats_report and "+
"replication_critical_localities reports (set to 0 to disable)",