71542: backupccl: Support RESTORE SYSTEM USERS from a backup r=gh-casper a=gh-casper

Support a new variant of RESTORE that recreates the system users that exist in a backup's system.users table but not in the current cluster, and grants those users their roles. Example invocation: RESTORE SYSTEM USERS FROM 'nodelocal://foo/1';

As with a full cluster restore, we first restore a temporary system database containing system.users and system.role_members into the restoring cluster, and then insert users and roles from the temporary system tables into the current system tables.

Fixes: #45358

Release note (sql change): A special flavor of RESTORE, RESTORE SYSTEM USERS FROM ..., has been added to support restoring system users from a backup. When executed, the statement recreates those users that are in a backup of system.users but do not currently exist (ignoring those that do), and re-grants roles for these users if the backup contains system.role_members.
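
A minimal sketch of the workflow (reusing the nodelocal path from the example above; any backup that includes system.users works):

```sql
-- On the source cluster: take a backup that contains system.users.
BACKUP TO 'nodelocal://foo/1';

-- On the destination cluster: recreate missing users and re-grant their roles.
RESTORE SYSTEM USERS FROM 'nodelocal://foo/1';
```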

73319: jobs: Execute scheduled jobs on a single node in the cluster. r=miretskiy a=miretskiy

Execute the scheduled jobs daemon on a single node -- namely, the
holder of the meta1 range lease.

Prior to this change, the scheduling daemon ran on every node,
periodically polling the scheduled jobs table with a `FOR UPDATE` clause.
Unfortunately, the job planning phase (namely, backup planning) could
take a significant amount of time. In such situations, the entire
scheduled jobs table would be locked, making it impossible
to introspect the state of schedules (or jobs) via `SHOW SCHEDULES` or similar
statements.

Furthermore, dropping the `FOR UPDATE` clause by itself is not ideal, because
expensive backup planning would then be executed on almost every
node, with only one node actually making progress.

Single-node mode is disabled by default, but can be enabled
via the `jobs.scheduler.single_node_scheduler.enabled` setting.
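
A minimal sketch of opting in, using the setting named above:

```sql
SET CLUSTER SETTING jobs.scheduler.single_node_scheduler.enabled = true;
```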

Release note: The scheduled jobs scheduler now runs on a single node by default
in order to reduce contention on the scheduled jobs table.

74077: kvserver: lease transfer in JOINT configuration r=shralex a=shralex

Previously:
1. Removing a leaseholder was not allowed.
2. A VOTER_INCOMING node wasn't able to accept the lease.

Because of (1), users needed to transfer the lease before removing
the leaseholder. Because of (2), when relocating a range from
leaseholder A to a new node B, there was no way to transfer the
lease to B before it was fully added as a VOTER. Adding it as a
voter first, however, could degrade fault tolerance. For example,
suppose A and B are in region R1, C is in region R2, and D is in R3,
and the range is on (A, C, D). Adding B to replace A creates the
intermediate configuration (A, B, C, D), in which a failure of R1
would make the range unavailable, since no quorum could be
established. Because B couldn't be added before A was removed, the
system would transfer the lease out to C, remove A and add B, and
then transfer the lease again, this time to B. This resulted in a
temporary migration of leases out of their preferred region, an
imbalance in lease counts, and degraded performance.

This PR fixes both issues: (1) it allows removing the leaseholder,
transferring the lease away right before exiting the JOINT config,
and (2) it allows a VOTER_INCOMING replica to accept the lease.

Release note (performance improvement): Fixes a limitation whereby,
upon adding a new node to the cluster, lease counts among existing
nodes could diverge until the new node was fully upreplicated.

Here are a few experiments that demonstrate the benefit of the feature.
1. 
> roachprod create local -n 4 // if not already created and staged
> roachprod put local cockroach
> roachprod start local:1-3 --racks=3 // add 3 servers in 3 different racks
> cockroach workload init kv --splits=10000
> roachprod start local:4 --racks=3 // add a 4th server in one of the racks
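
While the experiment runs, a rough way to watch the per-node lease distribution from SQL (assuming the crdb_internal.ranges virtual table; the DB Console graphs below show the same signal):

```sql
SELECT lease_holder, count(*) AS leases
FROM crdb_internal.ranges
GROUP BY lease_holder
ORDER BY lease_holder;
```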

Without the change (master):
<img width="978" alt="Screen Shot 2022-02-09 at 8 35 35 AM" src="https://user-images.githubusercontent.com/6037719/153458966-609dbb7e-ca3d-4db6-9cfb-adc228f2bdf2.png">

With the change:
<img width="986" alt="Screen Shot 2022-02-08 at 8 46 41 PM" src="https://user-images.githubusercontent.com/6037719/153459366-2d4e2def-37cf-405b-b601-8be57419ae02.png">

We can see that without the patch, the number of leases on server 0 (black line) drops all the way to 0 before climbing back up, and the number of leases in the other racks rises; both are undesirable. With the patch, neither happens.

2. Same as 1, but with a leaseholder preference of rack 0:

ALTER RANGE default CONFIGURE ZONE USING lease_preferences='[[+rack=0]]';

Without the change (master):
<img width="966" alt="Screen Shot 2022-02-09 at 10 45 27 PM" src="https://user-images.githubusercontent.com/6037719/153460753-bce048f0-f6da-4e21-afdc-317620c035b2.png">

With the change:
<img width="983" alt="leaseholder preferences - with change" src="https://user-images.githubusercontent.com/6037719/153460780-55795866-cf47-404d-b77a-45d9e011f972.png">

We can see that without the change, the combined number of leaseholders in racks 1 and 2 (outside the preferred region) grows from 300 to 1000, then falls back to 40. With the fix, it doesn't grow at all.

76401: pgwire: add server.max_connections public cluster setting r=rafiss a=ecwall

This setting specifies a maximum number of connections that a server can have open at any given time.
<0 - Connections are unlimited (existing behavior)
=0 - Connections are disabled
>0 - Connections are limited
If a new non-superuser connection would exceed this limit, the same error
message as Postgres is returned: "sorry, too many connections", with error
code 53300, which corresponds to "too many connections".
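
An illustrative sketch (the value 100 is arbitrary; the public setting name, per the docs diff below, is server.max_connections_per_gateway):

```sql
-- Cap each gateway node at 100 concurrent non-superuser connections.
SET CLUSTER SETTING server.max_connections_per_gateway = 100;

-- Restore the default: unlimited connections.
SET CLUSTER SETTING server.max_connections_per_gateway = -1;
```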

Release note (ops change): An off-by-default server.max_connections cluster
setting has been added to limit the maximum number of connections to a server.

76748: sql: add missing specs to plan diagrams r=rharding6373 a=rharding6373

This change allows missing specs (e.g., RestoreDataSpec and
SplitAndScatterSpec) to be shown in plan diagrams. Before this change, a
plan involving these types would result in an error when generating the
diagram. Also added a test to ensure that future specs implement the
`diagramCellType` interface, which is required to generate diagrams.
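
For context, these diagrams are rendered from DistSQL physical plans; for ordinary queries the same machinery can be exercised with, for example:

```sql
-- Emits a URL to a rendered DistSQL plan diagram; restore jobs reach the same
-- diagram-generation code through their own processor specs (e.g., RestoreDataSpec).
EXPLAIN (DISTSQL) SELECT count(*) FROM system.users;
```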

Release note: None


Co-authored-by: Casper <casper@cockroachlabs.com>
Co-authored-by: Yevgeniy Miretskiy <yevgeniy@cockroachlabs.com>
Co-authored-by: shralex <shralex@gmail.com>
Co-authored-by: Evan Wall <wall@cockroachlabs.com>
Co-authored-by: rharding6373 <rharding6373@users.noreply.github.com>
6 people committed Feb 18, 2022
6 parents 9cb7e3e + 6f695f3 + 0e2461e + 208e2b4 + 020cf4a + 607034a commit 255c1fb
Showing 46 changed files with 1,285 additions and 305 deletions.
3 changes: 2 additions & 1 deletion docs/generated/settings/settings-for-tenants.txt
@@ -51,6 +51,7 @@ server.eventlog.enabled boolean true if set, logged notable events are also stor
server.eventlog.ttl duration 2160h0m0s if nonzero, entries in system.eventlog older than this duration are deleted every 10m0s. Should not be lowered below 24 hours.
server.host_based_authentication.configuration string host-based authentication configuration to use during connection authentication
server.identity_map.configuration string system-identity to database-username mappings
server.max_connections_per_gateway integer -1 the maximum number of non-superuser SQL connections per gateway allowed at a given time (note: this will only limit future connection attempts and will not affect already established connections). Negative values result in unlimited number of connections. Superusers are not affected by this limit.
server.oidc_authentication.autologin boolean false if true, logged-out visitors to the DB Console will be automatically redirected to the OIDC login endpoint
server.oidc_authentication.button_text string Login with your OIDC provider text to show on button on DB Console login page to login with your OIDC provider (only shown if OIDC is enabled)
server.oidc_authentication.claim_json_key string sets JSON key of principal to extract from payload after OIDC authentication completes (usually email or sid)
@@ -179,4 +180,4 @@ trace.debug.enable boolean false if set, traces for recent requests can be seen
trace.jaeger.agent string the address of a Jaeger agent to receive traces using the Jaeger UDP Thrift protocol, as <host>:<port>. If no port is specified, 6381 will be used.
trace.opentelemetry.collector string address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.
trace.zipkin.collector string the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.
version version 21.2-68 set the active cluster version in the format '<major>.<minor>'
version version 21.2-70 set the active cluster version in the format '<major>.<minor>'
3 changes: 2 additions & 1 deletion docs/generated/settings/settings.html
@@ -63,6 +63,7 @@
<tr><td><code>server.eventlog.ttl</code></td><td>duration</td><td><code>2160h0m0s</code></td><td>if nonzero, entries in system.eventlog older than this duration are deleted every 10m0s. Should not be lowered below 24 hours.</td></tr>
<tr><td><code>server.host_based_authentication.configuration</code></td><td>string</td><td><code></code></td><td>host-based authentication configuration to use during connection authentication</td></tr>
<tr><td><code>server.identity_map.configuration</code></td><td>string</td><td><code></code></td><td>system-identity to database-username mappings</td></tr>
<tr><td><code>server.max_connections_per_gateway</code></td><td>integer</td><td><code>-1</code></td><td>the maximum number of non-superuser SQL connections per gateway allowed at a given time (note: this will only limit future connection attempts and will not affect already established connections). Negative values result in unlimited number of connections. Superusers are not affected by this limit.</td></tr>
<tr><td><code>server.oidc_authentication.autologin</code></td><td>boolean</td><td><code>false</code></td><td>if true, logged-out visitors to the DB Console will be automatically redirected to the OIDC login endpoint</td></tr>
<tr><td><code>server.oidc_authentication.button_text</code></td><td>string</td><td><code>Login with your OIDC provider</code></td><td>text to show on button on DB Console login page to login with your OIDC provider (only shown if OIDC is enabled)</td></tr>
<tr><td><code>server.oidc_authentication.claim_json_key</code></td><td>string</td><td><code></code></td><td>sets JSON key of principal to extract from payload after OIDC authentication completes (usually email or sid)</td></tr>
@@ -192,6 +193,6 @@
<tr><td><code>trace.jaeger.agent</code></td><td>string</td><td><code></code></td><td>the address of a Jaeger agent to receive traces using the Jaeger UDP Thrift protocol, as <host>:<port>. If no port is specified, 6381 will be used.</td></tr>
<tr><td><code>trace.opentelemetry.collector</code></td><td>string</td><td><code></code></td><td>address of an OpenTelemetry trace collector to receive traces using the otel gRPC protocol, as <host>:<port>. If no port is specified, 4317 will be used.</td></tr>
<tr><td><code>trace.zipkin.collector</code></td><td>string</td><td><code></code></td><td>the address of a Zipkin instance to receive traces, as <host>:<port>. If no port is specified, 9411 will be used.</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>21.2-68</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
<tr><td><code>version</code></td><td>version</td><td><code>21.2-70</code></td><td>set the active cluster version in the format '<major>.<minor>'</td></tr>
</tbody>
</table>
12 changes: 12 additions & 0 deletions docs/generated/sql/bnf/restore.bnf
@@ -23,3 +23,15 @@ restore_stmt ::=
| 'RESTORE' ( 'TABLE' table_pattern ( ( ',' table_pattern ) )* | 'DATABASE' database_name ( ( ',' database_name ) )* ) 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'WITH' restore_options_list
| 'RESTORE' ( 'TABLE' table_pattern ( ( ',' table_pattern ) )* | 'DATABASE' database_name ( ( ',' database_name ) )* ) 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'WITH' 'OPTIONS' '(' restore_options_list ')'
| 'RESTORE' ( 'TABLE' table_pattern ( ( ',' table_pattern ) )* | 'DATABASE' database_name ( ( ',' database_name ) )* ) 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' )
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'AS' 'OF' 'SYSTEM' 'TIME' timestamp 'WITH' restore_options_list
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'AS' 'OF' 'SYSTEM' 'TIME' timestamp 'WITH' 'OPTIONS' '(' restore_options_list ')'
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'AS' 'OF' 'SYSTEM' 'TIME' timestamp
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'WITH' restore_options_list
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'WITH' 'OPTIONS' '(' restore_options_list ')'
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' )
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'AS' 'OF' 'SYSTEM' 'TIME' timestamp 'WITH' restore_options_list
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'AS' 'OF' 'SYSTEM' 'TIME' timestamp 'WITH' 'OPTIONS' '(' restore_options_list ')'
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'AS' 'OF' 'SYSTEM' 'TIME' timestamp
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'WITH' restore_options_list
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' ) 'WITH' 'OPTIONS' '(' restore_options_list ')'
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' subdirectory 'IN' ( destination | '(' partitioned_backup_location ( ',' partitioned_backup_location )* ')' )
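
For instance, per the grammar above, a point-in-time restore of users from a named subdirectory might look like this (path and subdirectory hypothetical):

```sql
RESTORE SYSTEM USERS FROM '2022/02/18-000000.00' IN 'nodelocal://foo/backups'
    AS OF SYSTEM TIME '-10s';
```
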
2 changes: 2 additions & 0 deletions docs/generated/sql/bnf/stmt_block.bnf
@@ -188,6 +188,8 @@ restore_stmt ::=
| 'RESTORE' 'FROM' string_or_placeholder 'IN' list_of_string_or_placeholder_opt_list opt_as_of_clause opt_with_restore_options
| 'RESTORE' targets 'FROM' list_of_string_or_placeholder_opt_list opt_as_of_clause opt_with_restore_options
| 'RESTORE' targets 'FROM' string_or_placeholder 'IN' list_of_string_or_placeholder_opt_list opt_as_of_clause opt_with_restore_options
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' list_of_string_or_placeholder_opt_list opt_as_of_clause opt_with_restore_options
| 'RESTORE' 'SYSTEM' 'USERS' 'FROM' string_or_placeholder 'IN' list_of_string_or_placeholder_opt_list opt_as_of_clause opt_with_restore_options
| 'RESTORE' targets 'FROM' 'REPLICATION' 'STREAM' 'FROM' string_or_placeholder_opt_list opt_as_of_clause

resume_stmt ::=
88 changes: 88 additions & 0 deletions pkg/ccl/backupccl/backup_test.go
@@ -9476,3 +9476,91 @@ func TestExportRequestBelowGCThresholdOnDataExcludedFromBackup(t *testing.T) {
_, err = conn.Exec(fmt.Sprintf("BACKUP TABLE foo TO $1 AS OF SYSTEM TIME '%s'", tsBefore), localFoo)
require.NoError(t, err)
}

// TestBackupRestoreSystemUsers tests the RESTORE SYSTEM USERS feature, which allows
// users to be restored from a backup into the current cluster and their roles re-granted.
func TestBackupRestoreSystemUsers(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

sqlDB, tempDir, cleanupFn := createEmptyCluster(t, singleNode)
_, sqlDBRestore, cleanupEmptyCluster := backupRestoreTestSetupEmpty(t, singleNode, tempDir, InitManualReplication, base.TestClusterArgs{})
defer cleanupFn()
defer cleanupEmptyCluster()

sqlDB.Exec(t, `CREATE USER app; CREATE USER test`)
sqlDB.Exec(t, `CREATE ROLE app_role; CREATE ROLE test_role`)
sqlDB.Exec(t, `GRANT app_role TO test_role;`) // 'test_role' is a member of 'app_role'
sqlDB.Exec(t, `GRANT admin, app_role TO app; GRANT test_role TO test`)
sqlDB.Exec(t, `CREATE DATABASE db; CREATE TABLE db.foo (ind INT)`)
sqlDB.Exec(t, `BACKUP TO $1`, localFoo+"/1")
sqlDB.Exec(t, `BACKUP DATABASE db TO $1`, localFoo+"/2")
sqlDB.Exec(t, `BACKUP TABLE system.users TO $1`, localFoo+"/3")

// User 'test' exists in both clusters, but 'app' only exists in the backup.
sqlDBRestore.Exec(t, `CREATE USER test`)
sqlDBRestore.Exec(t, `CREATE DATABASE db`)
// Create multiple databases so that the max descriptor ID exceeds the max
// descriptor ID in the backup, to test that we correctly generate new descriptor IDs.
sqlDBRestore.Exec(t, `CREATE DATABASE db1; CREATE DATABASE db2; CREATE DATABASE db3`)

t.Run("system users", func(t *testing.T) {
sqlDBRestore.Exec(t, "RESTORE SYSTEM USERS FROM $1", localFoo+"/1")

// Role 'app_role' and user 'app' will be added, and 'app' is granted 'app_role'.
// User 'test' will remain untouched, with no roles granted.
sqlDBRestore.CheckQueryResults(t, "SELECT * FROM system.users", [][]string{
{"admin", "", "true"},
{"app", "NULL", "false"},
{"app_role", "NULL", "true"},
{"root", "", "false"},
{"test", "NULL", "false"},
{"test_role", "NULL", "true"},
})
sqlDBRestore.CheckQueryResults(t, "SELECT * FROM system.role_members", [][]string{
{"admin", "app", "false"},
{"admin", "root", "true"},
{"app_role", "app", "false"},
{"app_role", "test_role", "false"},
})
sqlDBRestore.CheckQueryResults(t, "SHOW USERS", [][]string{
{"admin", "", "{}"},
{"app", "", "{admin,app_role}"},
{"app_role", "", "{}"},
{"root", "", "{admin}"},
{"test", "", "{}"},
{"test_role", "", "{app_role}"},
})
})

t.Run("restore-from-backup-with-no-system-users", func(t *testing.T) {
sqlDBRestore.ExpectErr(t, "cannot restore system users as no system.users table in the backup",
"RESTORE SYSTEM USERS FROM $1", localFoo+"/2")
})

_, sqlDBRestore1, cleanupEmptyCluster1 := backupRestoreTestSetupEmpty(t, singleNode, tempDir, InitManualReplication, base.TestClusterArgs{})
defer cleanupEmptyCluster1()
t.Run("restore-from-backup-with-no-system-role-members", func(t *testing.T) {
sqlDBRestore1.Exec(t, "RESTORE SYSTEM USERS FROM $1", localFoo+"/3")

sqlDBRestore1.CheckQueryResults(t, "SELECT * FROM system.users", [][]string{
{"admin", "", "true"},
{"app", "NULL", "false"},
{"app_role", "NULL", "true"},
{"root", "", "false"},
{"test", "NULL", "false"},
{"test_role", "NULL", "true"},
})
sqlDBRestore1.CheckQueryResults(t, "SELECT * FROM system.role_members", [][]string{
{"admin", "root", "true"},
})
sqlDBRestore1.CheckQueryResults(t, "SHOW USERS", [][]string{
{"admin", "", "{}"},
{"app", "", "{}"},
{"app_role", "", "{}"},
{"root", "", "{admin}"},
{"test", "", "{}"},
{"test_role", "", "{}"},
})
})
}
64 changes: 63 additions & 1 deletion pkg/ccl/backupccl/restore_job.go
@@ -1629,6 +1629,15 @@ func (r *restoreResumer) doResume(ctx context.Context, execCtx interface{}) erro
// Reload the details as we may have updated the job.
details = r.job.Details().(jobspb.RestoreDetails)

if err := r.cleanupTempSystemTables(ctx, nil /* txn */); err != nil {
return err
}
} else if details.RestoreSystemUsers {
if err := r.restoreSystemUsers(ctx, p.ExecCfg().DB, mainData.systemTables); err != nil {
return err
}
details = r.job.Details().(jobspb.RestoreDetails)

if err := r.cleanupTempSystemTables(ctx, nil /* txn */); err != nil {
return err
}
@@ -1786,7 +1795,7 @@ func (r *restoreResumer) notifyStatsRefresherOfNewTables() {
// This is the last of the IDs pre-allocated by the restore planner.
// TODO(postamar): Store it directly in the details instead? This is brittle.
func tempSystemDatabaseID(details jobspb.RestoreDetails) descpb.ID {
if details.DescriptorCoverage != tree.AllDescriptors {
if details.DescriptorCoverage != tree.AllDescriptors && !details.RestoreSystemUsers {
return descpb.InvalidID
}
var maxPreAllocatedID descpb.ID
@@ -2555,6 +2564,59 @@ type systemTableNameWithConfig struct {
config systemBackupConfiguration
}

// restoreSystemUsers restores system.users from the backup into the restoring
// cluster. It only recreates users that are in a backup of system.users but do
// not currently exist (ignoring those that do), and re-grants roles for these
// users if the backup contains system.role_members.
func (r *restoreResumer) restoreSystemUsers(
ctx context.Context, db *kv.DB, systemTables []catalog.TableDescriptor,
) error {
executor := r.execCfg.InternalExecutor
return db.Txn(ctx, func(ctx context.Context, txn *kv.Txn) error {
selectNonExistentUsers := "SELECT * FROM crdb_temp_system.users temp " +
"WHERE NOT EXISTS (SELECT * FROM system.users u WHERE temp.username = u.username)"
users, err := executor.QueryBuffered(ctx, "get-users",
txn, selectNonExistentUsers)
if err != nil {
return err
}

insertUser := `INSERT INTO system.users ("username", "hashedPassword", "isRole") VALUES ($1, $2, $3)`
newUsernames := make(map[string]bool)
for _, user := range users {
newUsernames[user[0].String()] = true
if _, err = executor.Exec(ctx, "insert-non-existent-users", txn, insertUser,
user[0], user[1], user[2]); err != nil {
return err
}
}

// We skip granting roles if the backup does not contain system.role_members.
if len(systemTables) == 1 {
return nil
}

selectNonExistentRoleMembers := "SELECT * FROM crdb_temp_system.role_members temp_rm WHERE " +
"NOT EXISTS (SELECT * FROM system.role_members rm WHERE temp_rm.role = rm.role AND temp_rm.member = rm.member)"
roleMembers, err := executor.QueryBuffered(ctx, "get-role-members",
txn, selectNonExistentRoleMembers)
if err != nil {
return err
}

insertRoleMember := `INSERT INTO system.role_members ("role", "member", "isAdmin") VALUES ($1, $2, $3)`
for _, roleMember := range roleMembers {
// Only grant roles to users that didn't previously exist, i.e., the new users we just added.
if _, ok := newUsernames[roleMember[1].String()]; ok {
if _, err = executor.Exec(ctx, "insert-non-existent-role-members", txn, insertRoleMember,
roleMember[0], roleMember[1], roleMember[2]); err != nil {
return err
}
}
}
return nil
})
}

// restoreSystemTables atomically replaces the contents of the system tables
// with the data from the restored system tables.
func (r *restoreResumer) restoreSystemTables(