db: keep up to one memtable for recycling #2772
Conversation
Reviewable status: 0 of 2 files reviewed, 2 unresolved discussions (waiting on @jbowens)
-- commits
line 21 at r1:
why is this not showing improvement in allocation? Is this because of cgo, so these allocations are being hidden from the go runtime?
db.go
line 2229 at r1 (raw file):
```go
*recycleBuf = mem.arenaBuf
if unusedBuf := d.memTableRecycle.Swap(recycleBuf); unusedBuf != nil {
	// There was already a memtable waiting to be recycled. We're not
```
s/not/now/
Force-pushed from 722f948 to d7f0ff9
I updated this to recycle the entire memTable struct. This also helps reduce contention on the block cache, since we don't need to Reserve memory from all the block cache shards on every memtable rotation.
Reviewable status: 0 of 5 files reviewed, 2 unresolved discussions (waiting on @sumeerbhola)
Previously, sumeerbhola wrote…
why is this not showing improvement in allocation? Is this because of cgo, so these allocations are being hidden from the go runtime?
Yeah, that's right; the allocation reduction is only in cgo allocations.
db.go
line 2229 at r1 (raw file):
Previously, sumeerbhola wrote…
s/not/now/
Done.
Reviewed 5 of 5 files at r2, all commit messages.
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @bananabrick and @jbowens)
db.go
line 2244 at r2 (raw file):
```go
func (d *DB) freeMemTable(m *memTable) {
	d.memTableCount.Add(-1)
```
Can you update the code commentary around the ZombieCount and ZombieSize metrics to clarify that they include up to 1 memtable that is saved for reducing allocations. I don't think we export CRDB metrics for these.
We've observed large allocations like the 64MB memtable allocation take 10ms+. This can add latency to the WAL/memtable rotation critical section during which the entire commit pipeline is stalled, contributing to batch commit tail latencies. This commit adapts the memtable lifecycle to keep the most recent obsolete memtable around for use as the next mutable memtable. This reduces the commit latency hiccup during a memtable rotation, and it also reduces block cache mutex contention (cockroachdb#1997) by reducing the number of times we must reserve memory from the block cache.

```
goos: linux
goarch: amd64
pkg: github.com/cockroachdb/pebble
cpu: Intel(R) Xeon(R) CPU @ 2.30GHz
                   │   old.txt    │               new.txt               │
                   │    sec/op    │    sec/op     vs base               │
RotateMemtables-24   120.7µ ± 2%    102.8µ ± 4%   -14.85% (p=0.000 n=25)

                   │   old.txt    │               new.txt               │
                   │     B/op     │     B/op      vs base               │
RotateMemtables-24   124.3Ki ± 0%   124.0Ki ± 0%   -0.27% (p=0.000 n=25)

                   │   old.txt    │               new.txt               │
                   │  allocs/op   │  allocs/op    vs base               │
RotateMemtables-24     114.0 ± 0%    111.0 ± 0%    -2.63% (p=0.000 n=25)
```

Informs cockroachdb#2646.
Force-pushed from a294759 to 87f0a80
TFTR!
Reviewable status: 3 of 6 files reviewed, 3 unresolved discussions (waiting on @bananabrick and @sumeerbhola)
db.go
line 2244 at r2 (raw file):
Previously, sumeerbhola wrote…
Can you update the code commentary around the ZombieCount and ZombieSize metrics to clarify that they include up to 1 memtable that is saved for reducing allocations. I don't think we export CRDB metrics for these.
Done.