[CRDB-2743] sql: create builtin generator crdb_internal.probe_range #79546
Conversation
One thing I am unsure of is in the case of a timeout whether the
(force-pushed from b7a896d to 4da1cbc)
Thanks Alex! I left some conceptual comments that I think will simplify things a bit, and will also address your question (by removing it).
Reviewed 5 of 5 files at r1, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, @Santamaura, and @zachlite)
pkg/sql/sem/builtins/generator_builtins.go, line 478 at r1 (raw file):
```go
makeGeneratorOverload(
	tree.ArgTypes{
		{Name: "timeout", Typ: types.Int},
```

Shouldn't this be something like `types.IntervalDurationType` or `types.Interval`?
pkg/sql/sem/builtins/generator_builtins.go, line 1749 at r1 (raw file):
```go
// rangeKeyIterator is a ValueGenerator that iterates over all
// SQL keys in a target range.
// REFER TO HERE
```

?
pkg/sql/sem/builtins/generator_builtins.go, line 2436 at r1 (raw file):
```go
if err != nil {
	return nil, pgerror.Newf(
		pgcode.InFailedSQLTransaction,
```
Is this really the right code?
pkg/sql/sem/builtins/generator_builtins.go, line 2465 at r1 (raw file):
```go
prober:          *kvProber,
ranges:          make(map[int64]int),
remainingRanges: int64(numRanges),
```
This doesn't seem right. The range count changes out from under you through splits and merges, so any code that stores it like this is likely fishy.
pkg/sql/sem/builtins/generator_builtins.go, line 2481 at r1 (raw file):
```go
// Next implements the tree.ValueGenerator interface.
func (p *probeRangeGenerator) Next(ctx context.Context) (bool, error) {
	if p.remainingRanges == 0 {
```
Yeah, this isn't going to be correct. If a range splits while the probe is underway, you'll end up stopping short. I'm not sure what will happen if a range merges, but it doesn't really matter; we need to fix this.
I think the root problem is that this code wants to use the planner "as is". Instead, we should refactor the planner to make the interesting parts of it more reusable, and then reuse only those parts. We need only two things here: iterate through all range descriptors (i.e. discover all of the ranges in the system, with a consistent view) and then probe each of them.
Thinking this through a bit, there actually isn't a good reason to use the planner's ability to discover ranges, as it's too complex and prober-specific for what we need. The code we want is really this:
cockroach/pkg/sql/crdb_internal.go, lines 3224 to 3227 in d4daa99:

```go
ranges, err := kvclient.ScanMetaKVs(ctx, p.txn, roachpb.Span{
	Key:    keys.MinKey,
	EndKey: keys.MaxKey,
})
```
This isn't streaming, which isn't ideal, but it's also what `crdb_internal.ranges_no_leases` uses, so when we fix that we'll automatically fix it here. It is fine to use for now.
I'm also noticing that the `internal_no_leases` generator is much more compact and doesn't seem to need to define a new type, so maybe you can adopt this pattern here and forfeit a bunch of boilerplate? I might be missing something.
As for the probing itself, we "just" need to export `proberOpsImpl`:
cockroach/pkg/kv/kvprober/kvprober.go, lines 160 to 186 in 18b95cb:

```go
// proberOpsImpl is used to probe the kv layer.
type proberOpsImpl struct {
}

// We attempt to commit a txn that reads some data at the key.
func (p *proberOpsImpl) Read(key interface{}) func(context.Context, *kv.Txn) error {
	return func(ctx context.Context, txn *kv.Txn) error {
		_, err := txn.Get(ctx, key)
		return err
	}
}

// We attempt to commit a txn that puts some data at the key then deletes
// it. The test of the write code paths is good: We get a raft command that
// goes thru consensus and is written to the pebble log. Importantly, no
// *live* data is left at the key, which simplifies the kvprober, as then
// there is no need to clean up data at the key post range split / merge.
// Note that MVCC tombstones may be left by the probe, but this is okay, as
// GC will clean it up.
func (p *proberOpsImpl) Write(key interface{}) func(context.Context, *kv.Txn) error {
	return func(ctx context.Context, txn *kv.Txn) error {
		if err := txn.Put(ctx, key, putValue); err != nil {
			return err
		}
		return txn.Del(ctx, key)
	}
}
```
because that's what we want to invoke for the probe.
So in this PR, I would expect:
- get the list of descs (just like `ranges_no_leases`)
- for each desc, invoke `kvProbe{}.Write` (or read, depending on options), with much of the garnish that you have here already, and produce a row
(force-pushed from 4da1cbc to 46e5626)
The comments did help, thanks 👍
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, @tbg, and @zachlite)
pkg/sql/sem/builtins/generator_builtins.go, line 478 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Shouldn't this be something like `types.IntervalDurationType` or `types.Interval`?

Yeah, an interval makes more sense; changed.
pkg/sql/sem/builtins/generator_builtins.go, line 1749 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
?
A comment from when I was figuring out how builtin generators worked, removed.
pkg/sql/sem/builtins/generator_builtins.go, line 2436 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Is this really the right code?
The error code is irrelevant now, removed.
pkg/sql/sem/builtins/generator_builtins.go, line 2465 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
This doesn't seem right. The range count changes out from under you through splits and merges, so any code that stores it like this is likely fishy.
Right, that makes sense.
pkg/sql/sem/builtins/generator_builtins.go, line 2481 at r1 (raw file):
I think the root problem is that this code wants to use the planner "as is". Instead, we should refactor the planner to make the interesting parts of it more reusable, and then reuse only those parts. We need only two things here: iterate through all range descriptors (i.e. discover all of the ranges in the system, with a consistent view) and then probe each of them.
Yeah, I was iffy about using the current planner but wasn't sure if we should get the ranges the way `crdb_internal.ranges_no_leases` does. I've refactored the generator to do it that way and to just use `Read` or `Write` now.
I'm also noticing that the internal_no_leases generator is much more compact and doesn't seem to need to define a new type, so maybe you can adopt this pattern here and forfeit a bunch of boilerplate? I might be missing something.
When I was looking over those, there didn't seem to be a way of passing arguments into generators created that way; if it turns out they can have arguments passed, I can move it over.
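For context on why a dedicated type helps here: a generator defined as its own struct can carry constructor arguments (such as the probe timeout) as fields. A toy sketch of the pattern, simplified; the real `tree.ValueGenerator` interface has more methods and different signatures:

```go
package main

import "fmt"

// valueGenerator is a toy version of the generator interface: Start resets
// state, Next advances one row, Values returns the current row.
type valueGenerator interface {
	Start() error
	Next() (bool, error)
	Values() []interface{}
}

// countGenerator shows how an argument passed at construction time (n,
// standing in for something like a probe timeout) becomes a field.
type countGenerator struct {
	n, cur int
}

func makeCountGenerator(n int) valueGenerator { return &countGenerator{n: n} }

func (g *countGenerator) Start() error { g.cur = 0; return nil }

func (g *countGenerator) Next() (bool, error) {
	if g.cur >= g.n {
		return false, nil
	}
	g.cur++
	return true, nil
}

func (g *countGenerator) Values() []interface{} { return []interface{}{g.cur} }

func main() {
	g := makeCountGenerator(3)
	_ = g.Start()
	for {
		ok, _ := g.Next()
		if !ok {
			break
		}
		fmt.Println(g.Values()[0]) // prints 1, 2, 3 on successive lines
	}
}
```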
(force-pushed from 46e5626 to f8ef75e)
Thanks! I pushed a commit with some fixes and a bunch of TODO(alex)es, which are basically my review comments. Please integrate the code changes into your commit (if they make sense to you); no need to preserve my authorship.
Reviewed 5 of 5 files at r2, 2 of 2 files at r3.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, @tbg, and @zachlite)
Reviewed 1 of 1 files at r4, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, and @zachlite)
(force-pushed from 2c70627 to 0ea1b2e)
I think I covered everything from the TODOs; let me know if anything is outstanding.
Reviewed 7 of 7 files at r5, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, @Santamaura, and @zachlite)
pkg/sql/sem/builtins/generator_probe_ranges.go, line 74 at r5 (raw file):
```go
A write probe will effectively probe reads as well.
`,
// TODO(alex): add unit tests of this via a LogicTest. Make sure that the test doesn't verify anything that changes
```
Here's a TODO still. You have this test, though the "trace matches the string ..." part doesn't seem to be in it.
pkg/sql/sem/builtins/generator_probe_ranges.go, line 91 at r5 (raw file):
```go
EnumData: &types.EnumMetadata{
	LogicalRepresentations:  enumMembers,
	PhysicalRepresentations: [][]byte{
```
I have no idea what's going on here, you may want to get a SQL person's review as well at least for these generator parts.
pkg/sql/sem/builtins/generator_probe_ranges.go, line 194 at r5 (raw file):
```go
}()
ops := &kvprober.ProberOps{}
```

nit: you can make this a field on `probeRangeGenerator`. `ProberOps` is a `struct{}`, so this line technically doesn't allocate, but it saves future readers a double take.
pkg/sql/logictest/testdata/logic_test/generator_probe_ranges, line 17 at r5 (raw file):
```
query T
SELECT substring('$trace_str', 33, 34)
```
What's this doing? We want to check for evidence of a trace event that comes out of KV which I am sure this isn't doing.
(force-pushed from 0ea1b2e to 1521bed)
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, @tbg, and @zachlite)
pkg/sql/sem/builtins/generator_probe_ranges.go, line 74 at r5 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Here's a TODO still. You have this test, though the "trace matches the string ..." part doesn't seem to be in it.
😄 I didn't realize we wanted to actually match 'proposing command'. Fixed.
pkg/sql/sem/builtins/generator_probe_ranges.go, line 91 at r5 (raw file):
Previously, tbg (Tobias Grieger) wrote…
I have no idea what's going on here, you may want to get a SQL person's review as well at least for these generator parts.
Good idea; if nobody reviews it in some time, I'll reach out to someone on SQL.
pkg/sql/sem/builtins/generator_probe_ranges.go, line 194 at r5 (raw file):
Previously, tbg (Tobias Grieger) wrote…
nit: you can make this a field on `probeRangeGenerator`. `ProberOps` is a `struct{}`, so this line technically doesn't allocate, but it saves future readers a double take.
Done.
pkg/sql/logictest/testdata/logic_test/generator_probe_ranges, line 17 at r5 (raw file):
Previously, tbg (Tobias Grieger) wrote…
What's this doing? We want to check for evidence of a trace event that comes out of KV which I am sure this isn't doing.
Changed so that we confirm the verbose trace includes 'proposing command'.
Thank you! Still think this is worth a glance from SQL, but this may even happen after the merge IMO.
Reviewed 2 of 2 files at r6, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @joshimhoff, @koorosh, @Santamaura, and @zachlite)
pkg/sql/sem/builtins/generator_probe_ranges.go, line 74 at r5 (raw file):
Previously, Santamaura (Alex Santamaura) wrote…
😄 I didn't realize we wanted to actually match 'proposing command'. Fixed.
Wanna remove the TODO?
pkg/sql/sem/builtins/generator_probe_ranges.go, line 125 at r6 (raw file):
```go
// The below are updated during calls to Next() throughout the lifecycle of
// probeRangeGenerator.
ops *kvprober.ProberOps
```

I was thinking `ops kvprober.ProberOps`, and then in `makeProbeRangeGenerator` you can omit the field since the zero value is already good to go.
pkg/sql/logictest/testdata/logic_test/generator_probe_ranges, line 14 at r6 (raw file):
```
# test that the trace has a string matching `proposing command` to verify trace events
# from the kvserver write path are received
```
Nit: `# Test that .... are received.`
(force-pushed from 1521bed to 5a951cd)
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @joshimhoff, @koorosh, @tbg, and @zachlite)
pkg/sql/sem/builtins/generator_probe_ranges.go, line 74 at r5 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Wanna remove the TODO?
Oops forgot. Removed.
pkg/sql/sem/builtins/generator_probe_ranges.go, line 125 at r6 (raw file):
Previously, tbg (Tobias Grieger) wrote…
I was thinking `ops kvprober.ProberOps`, and then in `makeProbeRangeGenerator` you can omit the field since the zero value is already good to go.
👍 done.
pkg/sql/logictest/testdata/logic_test/generator_probe_ranges, line 14 at r6 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Nit: `# Test that .... are received.`
Done.
bors r+
Build succeeded:
I wonder if this tool would be available during a full cluster SQL outage, such as the recent one involving a down
It worked fine:
Here is the change I made locally to test:
I'm going to keep exploring this Q, but I'll hold off on posting anything, unless I learn something interesting.
Maybe I'll write an integration test that makes all system ranges that we don't expect to be hard deps of
With all ranges with range IDs >= 2 unavailable, connecting to an insecure five node cluster + querying
OTOH, if range with range ID = 1 is down also, we get an error. This makes sense, since probing requires an available meta2 range.
This is really awesome! Some Qs:
Tiny follow up: #81107
I tested this. I could connect as root with root client credentials. Even with all ranges down including meta2. Yay! And this comment confirms that this behavior is a design goal:
81107: sql: trace meta2 scans made by crdb_internal.probe_ranges r=joshimhoff a=joshimhoff

**sql: trace meta2 scans made by crdb_internal.probe_ranges**

With this commit, CRDB traces the meta2 scan made by crdb_internal.probe_ranges. If the scan fails, the trace is attached to the error returned to the SQL client. This way, outages involving a down meta2 range are observable. According to my testing, meta2 is the only system range that can make crdb_internal.probe_ranges unavailable. As a result, with this commit, we should get a trace of a query to the unavailable range, regardless of which range is unavailable, either in the SQL table output of crdb_internal.probe_ranges or in an error string.

Release note: None.

------------------------

This is based on some thinking & testing I did at #79546 (comment). With this commit, we get:

```
root@:26257/defaultdb> select range_id, error from crdb_internal.probe_ranges(INTERVAL '1s', 'write');
ERROR: 0.000ms 0.000ms === operation:meta2scan _verbose:1 client:127.0.0.1:49588 node:1 user:root
0.010ms 0.010ms === operation:txn coordinator send _verbose:1 client:127.0.0.1:49588 node:1 txnID:be5903f5-1817-481b-8dcf-2e2bc8b0f052 user:root
0.020ms 0.010ms === operation:dist sender send _verbose:1 client:127.0.0.1:49588 node:1 txn:be5903f5 user:root
0.063ms 0.043ms event:kv/kvclient/kvcoord/range_iter.go:175 [n1,client=127.0.0.1:49588,user=root,txn=be5903f5] querying next range at /Meta2/"\x00"
0.122ms 0.059ms event:kv/kvclient/kvcoord/dist_sender.go:2032 [n1,client=127.0.0.1:49588,user=root,txn=be5903f5] r1: sending batch 1 Scan to (n1,s1):1
0.132ms 0.010ms event:rpc/nodedialer/nodedialer.go:161 [n1,client=127.0.0.1:49588,user=root,txn=be5903f5] sending request to local client
0.136ms 0.004ms === operation:/cockroach.roachpb.Internal/Batch _verbose:1 node:1 span.kind:server
0.144ms 0.008ms event:server/node.go:996 [n1] node received request: 1 Scan
0.174ms 0.030ms event:kv/kvserver/store_send.go:157 [n1,s1] executing Scan [/Meta2/"\x00",/Meta2/Max), [txn: be5903f5], [can-forward-ts]
0.335ms 0.161ms event:server/node.go:1009 [n1] error from stores.Send: boom: "sql txn" meta={id=be5903f5 key=/Min pri=0.02292157 epo=0 ts=1651868092.726275000,0 min=1651868092.726275000,0 seq=0} lock=false stat=PENDING rts=1651868092.726275000,0 wto=false gul=1651868093.226275000,0
0.384ms 0.252ms event:kv/kvclient/kvcoord/dist_sender.go:2144 [n1,client=127.0.0.1:49588,user=root,txn=be5903f5] boom: "sql txn" meta={id=be5903f5 key=/Min pri=0.02292157 epo=0 ts=1651868092.726275000,0 min=1651868092.726275000,0 seq=0} lock=false stat=PENDING rts=1651868092.726275000,0 wto=false gul=1651868093.226275000,0
0.442ms 0.058ms event:kv/kvclient/kvcoord/dist_sender.go:1674 [n1,client=127.0.0.1:49588,user=root,txn=be5903f5] reply error Scan [/Meta2/"\x00",/Meta2/Max), [txn: be5903f5], [can-forward-ts]: boom: "sql txn" meta={id=be5903f5 key=/Min pri=0.02292157 epo=0 ts=1651868092.726275000,0 min=1651868092.726275000,0 seq=0} lock=false stat=PENDING rts=1651868092.726275000,0 wto=false gul=1651868093.226275000,0 : boom
```

If I run a test like this:

```
~/g/s/g/c/cockroach [tweak_crdb_internal_probe] $ git diff
diff --git a/pkg/kv/kvserver/replica_send.go b/pkg/kv/kvserver/replica_send.go
index 3295886e91..ff7a1ed4df 100644
--- a/pkg/kv/kvserver/replica_send.go
+++ b/pkg/kv/kvserver/replica_send.go
@@ -12,7 +12,10 @@ package kvserver

+var sigCh = func() chan os.Signal {
+	ch := make(chan os.Signal, 1)
+	signal.Notify(ch, syscall.SIGHUP)
+	return ch
+}()
+
+var weDead atomic.Int32
+
 // sendWithoutRangeID used to be called sendWithRangeID, accepted a `_forStacks
 // roachpb.RangeID` argument, and had the description below. Ever since Go
 // switched to the register-based calling convention though, this stopped
@@ -126,6 +138,16 @@ func (r *Replica) sendWithoutRangeID(
 ) (_ *roachpb.BatchResponse, rErr *roachpb.Error) {
 	var br *roachpb.BatchResponse
+	select {
+	case <-sigCh:
+		log.Warningf(ctx, "time to drop incoming raft messages for range")
+		weDead.Store(1)
+	default:
+	}
+	if weDead.Load() == 1 && ba.RangeID >= 1 {
+		return nil, roachpb.NewError(errors.New("boom"))
+	}
+
 	if r.leaseholderStats != nil && ba.Header.GatewayNodeID != 0 {
 		r.leaseholderStats.RecordCount(r.getBatchRequestQPS(ctx, ba), ba.Header.GatewayNodeID)
 	}
```

Without this commit, we just get:

```
root@:26257/?> select range_id, error from crdb_internal.probe_ranges(INTERVAL '1s', 'write');
ERROR: boom
```

Co-authored-by: Josh Imhoff <josh@cockroachlabs.com>
@Santamaura Is it possible to backport this function to v22.1? We would love to use it sooner than later!
Sure, backport PR has been opened.
Out of curiosity, could I have done this myself? Are there docs I should have read first?
I made the backport via our backport tool. Doc on backports is here: https://cockroachlabs.atlassian.net/wiki/spaces/CRDB/pages/900005932/Backporting+a+change+to+a+release+branch

So for this particular PR, possibly (since there were no complicated merge conflicts); for large PRs it's usually better for the author to do it, since they have better context.
Assuming it's not difficult, can we also backport to 21.2? Thanks for this great tool.
Sorry @joshimhoff, I was working on something and completely forgot about the 21.2 backport request; I've opened one now.
Thanks! Appreciate it.
…107752 #107802 #107803

106508: util/must: add runtime assertion API r=erikgrinaker a=erikgrinaker

For details and usage examples, see the [package documentation](https://github.com/erikgrinaker/cockroach/blob/must/pkg/util/must/must.go).

---

This patch adds a convenient and canonical API for runtime assertions, inspired by the Testify package used for Go test assertions. It is intended to encourage liberal use of runtime assertions throughout the code base, by making it as easy as possible to write assertions that follow best practices. It does not attempt to reinvent the wheel, but instead builds on existing infrastructure.

Assertion failures are fatal in all non-release builds, including roachprod clusters and roachtests, to ensure they will be noticed. In release builds, they instead log the failure and report it to Sentry (if enabled), and return an assertion error to the caller for propagation. This avoids excessive disruption in production environments, where an assertion failure is often scoped to an individual RPC request, transaction, or range, and crashing the node can turn a minor problem into a full-blown outage. It is still possible to kill the node when appropriate via `log.Fatalf`, but this should be the exception rather than the norm.

It also supports expensive assertions that must be compiled out of normal dev/test/release builds for performance reasons. These are instead enabled in special test builds.

This is intended to be used instead of other existing assertion mechanisms, which have various shortcomings:

* `log.Fatalf`: kills the node even in release builds, which can cause severe disruption over often minor issues.
* `errors.AssertionFailedf`: only suitable when we have an error return path, does not fatal in non-release builds, and we are not always notified in release builds.
* `logcrash.ReportOrPanic`: panics rather than fatals, which can leave the node limping along. Requires the caller to implement separate assertion handling in release builds, which is easy to forget. Also requires propagating cluster settings, which aren't always available.
* `buildutil.CrdbTestBuild`: only enabled in Go tests, not roachtests, roachprod clusters, or production clusters.
* `util.RaceEnabled`: only enabled in race builds. Expensive assertions should be possible to run without the additional overhead of the race detector.

For more details and examples, see the `must` package documentation.

Resolves #94986.
Epic: none
Release note: None

107094: streamingest: unskip TestTenantStreamingUnavailableStreamAddress r=lidorcarmel a=lidorcarmel

Changing a few things to get this test to pass under stress:
- use 50 ranges instead of 10, because there are already 50-ish system ranges, so if we write only 10 more ranges those might not get distributed on all servers.
- avoid reading from the source cluster after stopping a node, it's flaky, see #107499 for more info.

Epic: none
Fixes: #107023
Fixes: #106865
Release note: None

107717: server/profiler: remove `server.cpu_profile.enabled` setting r=xinhaoz a=xinhaoz

Cpu profiling can be enabled by setting the cluster setting `server.cpu_profile.cpu_usage_combined_threshold`. This makes `server.cpu_profile.enabled` redundant and makes it more difficult and confusing to enable cpu profiling. This commit removes the `server.cpu_profile.enabled` setting entirely. Note that both default values for the cluster settings set profiling off.

Closes: #102024
Release note (sql change): The cluster setting `server.cpu_profile.enabled` has been removed. `server.cpu_profile.cpu_usage_combined_threshold` can enable and disable cpu profiling.

107720: cli: add probe_range in debug.zip r=j82w a=j82w

PR #79546 introduces `crdb_internal.probe_range`. This PR adds the `crdb_internal.probe_range` to the debug.zip. The LIMIT gives a very approximate ~1000ms*100 target on how long this can take, so that running debug.zip against an unavailable cluster won't take too long.

closes: #80360
Release note (cli change): The debug.zip now includes the `crdb_internal.probe_range` table with a limit of 100 rows to avoid the query from taking too long.

107727: server: deflake TestServerShutdownReleasesSession r=rafiss a=rafiss

The tenant was not being fully stopped, so the test could encounter flakes.

fixes #107592
Release note: None

107742: ui: show txn fingerprint details page with unspecified app r=xinhaoz a=xinhaoz

Previously, when the app was not specified in the url search params for the txn details fingerprint page, the page would fail to load. This commit allows the page to load when there is no app specified but a fingerprint id that matches the requested page in the payload. The first matching fingerprint id is loaded. Additionally, the TransactionDetailsLink will not include the appNames search param unless the provided prop is non-nullish.

Fixes: #107731
Release note (bug fix): Txn fingerprint details page in the console UI should load with the fingerprint details even if no app is specified in the URL.

Demo: https://www.loom.com/share/810308d3dcd74ca888c42287ebafaecf

107745: kvserver: fix test merge queue when grunning unsupported r=irfansharif a=kvoli

`TestMergeQueue/load-based-merging/switch...below-threshold` asserts that switching the split objective between CPU and QPS will not cause ranges to merge, even if their pre-switch load qualified them for merging. This test was broken when `grunning` was unsupported, as the objective never actually switches to anything other than QPS. Add a check for `grunning` support, and assert that a merge occurs if unsupported.

Fixes: #106937
Epic: none
Release note: None

107749: opt: add enable_durable_locking_for_serializable session variable r=DrewKimball,nvanbenschoten a=michae2

Follow-up from #105857

This commit amends 6a3e43d to add a session variable to control whether guaranteed-durable locks are used under serializable isolation.

Informs: #100194
Epic: CRDB-25322
Release note (sql change): Add a new session variable, `enable_durable_locking_for_serializable`, which controls locking durability under serializable isolation. With this set to true, SELECT FOR UPDATE locks, SELECT FOR SHARED locks, and constraint check locks (e.g. locks acquired during foreign key checks if `enable_implicit_fk_locking_for_serializable` is set to true) will be guaranteed-durable under serializable isolation, meaning they will always be held to transaction commit. (These locks are always guaranteed-durable under weaker isolation levels.) By default, under serializable isolation these locks are best-effort rather than guaranteed-durable, meaning in some cases (e.g. leaseholder transfer, node loss, etc.) they could be released before transaction commit. Serializable isolation does not rely on locking for correctness, only using it to improve performance under contention, so this default is a deliberate choice to avoid the performance overhead of lock replication.

107752: changefeedccl: prevent deadlock in TestChangefeedKafkaMessageTooLarge r=miretskiy a=jayshrivastava

Previously, this test would deadlock due to kafka retrying messages too many times. These messages are stored in a buffer of size 1024 created by the CDC testing infra: https://github.com/cockroachdb/cockroach/blob/5c3f96d38cdc3a2d953ca3ffb1e39e97d7e5110e/pkg/ccl/changefeedccl/testfeed_test.go#L1819

The test asserts that 2000 messages pass through the buffer. When the test finishes, it stops reading from the buffer. The problem is that due to retries, there may be more messages sent to the buffer than are read out of the buffer. Even after the 2000 messages are read and the test is shutting down, the sink may be blocked trying to put resolved messages (plus retries) in the buffer. If this happens, the changefeed resumer (same goroutine as the kafka sink) gets blocked and does not terminate when the job is cancelled at the end of the test.

This change caps the number of retries at 200 for this test, so there should be no more than 200 extra messages plus a few resolved messages during this test. This is far less than the buffer size of 1024. See detailed explanation in #107591.

Fixes: #107591
Epic: none
Release note: None

107802: teamcity-trigger: don't start a job for an empty target r=healthy-pod a=rickystewart

This makes no sense, so skip these cases.

Closes: #107779
Closes: #107780
Closes: #107781
Epic: none
Release note: None

107803: githubpost: set `map` field if `null` r=healthy-pod a=rickystewart

Go is a really good language.

Informs: #107779
Epic: none
Release note: None

Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com>
Co-authored-by: Lidor Carmel <lidor@cockroachlabs.com>
Co-authored-by: Xin Hao Zhang <xzhang@cockroachlabs.com>
Co-authored-by: j82w <jwilley@cockroachlabs.com>
Co-authored-by: Rafi Shamim <rafi@cockroachlabs.com>
Co-authored-by: Austen McClernon <austen@cockroachlabs.com>
Co-authored-by: Michael Erickson <michae2@cockroachlabs.com>
Co-authored-by: Jayant Shrivastava <jayants@cockroachlabs.com>
Co-authored-by: Ricky Stewart <ricky@cockroachlabs.com>
Previously, it was difficult to diagnose kv layer health when an
incident occurred. This patch introduces the new virtual table
crdb_internal.probe_range, which utilizes the kvprober to probe
each range to determine whether the range can be reached or not.
resolves #61695
Release note: None