c-deps: bump rocksdb for multiple backported PRs #37172
Conversation
Includes the following changes, all of which have landed upstream.

- cockroachdb/rocksdb#27: "ldb: set `total_order_seek` for scans"
- cockroachdb/rocksdb#28: "Fix cockroachdb#3840: only `SyncClosedLogs` for multiple CFs"
- cockroachdb/rocksdb#29: "Optionally wait on bytes_per_sync to smooth I/O"
- cockroachdb/rocksdb#30: "Option string/map/file can set env from object registry"

Also made the RocksDB changes that we decided in cockroachdb#34897:

- Do not sync WAL before installing flush result. This is achieved by backporting cockroachdb/rocksdb#28; no configuration change is necessary.
- Do not sync WAL ever for temp stores. This is achieved by setting `wal_bytes_per_sync = 0`.
- Limit size of final syncs when generating SSTs. This is achieved by backporting cockroachdb/rocksdb#29 and turning it on with `strict_bytes_per_sync = true`.

Release note: None
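The last two bullets boil down to a pair of option values. A minimal sketch of that configuration, using a stand-in struct whose field names match the RocksDB options named above (the struct and the `configure_sync_options` helper are illustrative, not CockroachDB's actual code):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the relevant fields of rocksdb::Options; illustrative only.
struct SyncOptions {
  uint64_t wal_bytes_per_sync = 0;     // 0 disables periodic WAL syncing
  uint64_t bytes_per_sync = 0;         // background-sync granularity for SST writes
  bool strict_bytes_per_sync = false;  // backported by cockroachdb/rocksdb#29
};

// Hypothetical helper mirroring the settings described in the PR.
SyncOptions configure_sync_options() {
  SyncOptions opts;
  // Never sync the WAL periodically: non-temp stores sync every write via
  // FlushWAL(true), and temp stores are never synced at all.
  opts.wal_bytes_per_sync = 0;
  // Limit the size of final syncs when generating SSTs by strictly syncing
  // every bytes_per_sync bytes instead of only requesting async writeback.
  opts.bytes_per_sync = 512 << 10;  // 512 KiB; the value here is illustrative
  opts.strict_bytes_per_sync = true;
  return opts;
}
```

With `strict_bytes_per_sync = false`, `bytes_per_sync` only requests asynchronous writeback, so a large unsynced tail can still accumulate; the strict mode caps how much dirty data the final sync has to flush.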
LGTM
options.strict_bytes_per_sync = true;
// Do not sync the WAL periodically. We sync it every write already by calling
// `FlushWAL(true)` on non-temp stores. On the temp store we do not intend to
// sync WAL ever, so setting it to zero is fine there too.
While we don't intend to sync the WAL on temp-stores, I wonder if not syncing can run into problems where the WAL dirty data forces the OS to flush. Hopefully we're deleting the WAL before that happens, though. Ok, I just talked myself into being ok with this.
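The sync discipline discussed above can be sketched as follows. This is a toy stand-in, not CockroachDB's engine API: `Engine`, `CommitWrite`, and the counter are hypothetical; `FlushWAL(bool sync)` mimics the real `rocksdb::DB::FlushWAL` signature.

```cpp
#include <cassert>

// Illustrative model of the WAL sync discipline: durable stores sync the WAL
// explicitly on every write, temp stores never do, so the periodic
// wal_bytes_per_sync trigger is unnecessary on both.
struct Engine {
  bool is_temp_store;
  int wal_syncs = 0;

  // Stand-in for rocksdb::DB::FlushWAL(bool sync).
  void FlushWAL(bool sync) {
    if (sync) ++wal_syncs;
  }

  void CommitWrite() {
    // ... apply the write batch ...
    if (!is_temp_store) {
      FlushWAL(/*sync=*/true);  // durable stores sync every write
    }
    // Temp stores skip the sync entirely; their WALs are expected to be
    // deleted before the OS would be forced to flush the dirty data.
  }
};
```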
bors r+
37172: c-deps: bump rocksdb for multiple backported PRs r=ajkr a=ajkr

Includes the following changes, all of which have landed upstream.

- cockroachdb/rocksdb#27: "ldb: set `total_order_seek` for scans"
- cockroachdb/rocksdb#28: "Fix #3840: only `SyncClosedLogs` for multiple CFs"
- cockroachdb/rocksdb#29: "Optionally wait on bytes_per_sync to smooth I/O"
- cockroachdb/rocksdb#30: "Option string/map/file can set env from object registry"

Also made the RocksDB changes that we decided in #34897:

- Do not sync WAL before installing flush result. This is achieved by backporting cockroachdb/rocksdb#28; no configuration change is necessary.
- Do not sync WAL ever for temp stores. This is achieved by setting `wal_bytes_per_sync = 0`.
- Limit size of final syncs when generating SSTs. This is achieved by backporting cockroachdb/rocksdb#29 and turning it on with `strict_bytes_per_sync = true`.

Release note: None

Co-authored-by: Andrew Kryczka <andrew.kryczka2@gmail.com>
Build succeeded