69905: colexec: adds support for partial ordering in topk sorter r=rharding6373 a=rharding6373

Previously, topKSorter had to process all input rows before returning
the top K rows according to its specified ordering. If a subset of the
input rows was already ordered, topKSorter would still iterate over the
entire input.

However, if the input is partially ordered, topKSorter can stop
iterating early: once it holds K candidates at the end of a group of
rows with equal values on the ordered columns, no later row can
displace them.

For example, take the following query and table with an index on a:

```
  a | b
----+----
  1 | 5
  2 | 3
  2 | 1
  3 | 3
  5 | 3

SELECT * FROM t ORDER BY a, b LIMIT 2
```

Given an index scan on `a` to provide `a`'s ordering, top K only needs
to process 3 rows to guarantee that it has found the top K rows. Once
it finishes processing the third row `[2, 1]`, all subsequent rows have
higher values of `a` than the top 2 rows found so far, and therefore
cannot be in the top 2 rows.

This change modifies the vectorized engine's TopKSorter signature to include
a partial ordering. The TopKSorter chunks the input according to the
sorted columns and processes each chunk with its existing heap
algorithm. At the end of each chunk, if K rows are in the heap,
TopKSorter emits the rows and stops execution.
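
To sketch the idea outside the vectorized engine (a simplified,
standalone Go illustration with made-up types; not the actual colexec
implementation), the input is consumed one chunk at a time, where a
chunk is a run of rows with equal values on the already-ordered column,
and the scan stops at the first chunk boundary where the heap already
holds K rows:

```
package main

import (
	"container/heap"
	"fmt"
	"sort"
)

// row mirrors the two-column example above; column a is already ordered.
type row struct{ a, b int }

func less(x, y row) bool {
	if x.a != y.a {
		return x.a < y.a
	}
	return x.b < y.b
}

// maxHeap keeps the current worst of the top-K candidates at the root so
// it can be replaced cheaply when a better row arrives.
type maxHeap []row

func (h maxHeap) Len() int            { return len(h) }
func (h maxHeap) Less(i, j int) bool  { return less(h[j], h[i]) }
func (h maxHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *maxHeap) Push(x interface{}) { *h = append(*h, x.(row)) }
func (h *maxHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// topK assumes rows are sorted on column a (the partial ordering) and
// returns the top k rows ordered by (a, b), stopping as early as possible.
func topK(rows []row, k int) []row {
	h := &maxHeap{}
	for i := 0; i < len(rows); {
		// A chunk is a maximal run of rows with the same value of a.
		j := i
		for j < len(rows) && rows[j].a == rows[i].a {
			j++
		}
		// Feed the chunk into the heap, keeping at most k candidates.
		for _, r := range rows[i:j] {
			if h.Len() < k {
				heap.Push(h, r)
			} else if less(r, (*h)[0]) {
				(*h)[0] = r
				heap.Fix(h, 0)
			}
		}
		// Chunk boundary: every later row has a strictly larger a, so if
		// the heap already holds k rows, none of them can be displaced.
		if h.Len() == k {
			break
		}
		i = j
	}
	out := append([]row(nil), (*h)...)
	sort.Slice(out, func(i, j int) bool { return less(out[i], out[j]) })
	return out
}

func main() {
	rows := []row{{1, 5}, {2, 3}, {2, 1}, {3, 3}, {5, 3}}
	fmt.Println(topK(rows, 2)) // [{1 5} {2 1}]
}
```

Running this sketch on the example table above returns `[{1 5} {2 1}]`
after visiting only the first three rows.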

A later commit, once merged with the top K optimizer and DistSQL changes, will adjust the cost model for top K to reflect this change.

Release note: N/A

70285: log: add `version` field to `json` formatted log entries r=knz a=cameronnunez

Fixes [#70202](#70202).

Release note (cli change): version details have been added to all JSON-formatted
log entries. Refer to the reference docs for details about the field.

70380: backupccl: drop temp system database on failed restore r=irfansharif a=adityamaru

Previously, if a restore failed during execution
we would not clean up the temporary system DB descriptor
that we create during a cluster restore. A
`SHOW DATABASES` after the failed restore would still show
the `crdb_temp_system` database.

This change adds logic to drop the database in the
OnFailOrCancel hook of the restore job.
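
As an illustration of the symptom and the fixed behavior (the backup
location below is hypothetical):

```
-- A cluster restore that fails partway through, e.g. because the backup is corrupt.
RESTORE FROM 'nodelocal://1/cluster-backup';

-- Before this change, the scratch database used during the restore lingered:
SELECT database_name FROM [SHOW DATABASES] WHERE database_name = 'crdb_temp_system';
-- With this change, OnFailOrCancel drops it, so the query above returns no rows.
```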

Fixes: #70324

Release note: None

70452: backupccl: fix error when restoring a table that references a type defined in user-defined schema r=gh-casper a=gh-casper

Previously, restore would fail when restoring a table that references a type defined in a user-defined schema into a new database.

This change adds logic to use the ID of the schema in the restore target database for a type when that schema has the same name as the schema in the backed-up database.
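
A rough sketch of the previously failing scenario (database, schema, and
backup location names are illustrative):

```
CREATE DATABASE d;
CREATE SCHEMA d.s;
CREATE TYPE d.s.greeting AS ENUM ('hello', 'howdy', 'hi');
CREATE TABLE d.s.t (x d.s.greeting);
BACKUP TABLE d.s.t TO 'nodelocal://1/backup';

-- Restore into a new database that already has a compatible type in a
-- same-named user-defined schema.
CREATE DATABASE d2;
CREATE SCHEMA d2.s;
CREATE TYPE d2.s.greeting AS ENUM ('hello', 'howdy', 'hi');

-- Previously this failed; now the type reference is remapped to d2.s.greeting.
RESTORE TABLE d.s.t FROM 'nodelocal://1/backup' WITH into_db = 'd2';
```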

Fixes: #70168

Release note: none

70472: sql: include ON UPDATE on CREATE TABLE LIKE for INCLUDING DEFAULTS r=rafiss a=otan

Resolves #69258

Release note (sql change): CREATE TABLE ... LIKE ... now copies ON
UPDATE definitions when INCLUDING DEFAULTS is specified.
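
For illustration (table and column names are made up):

```
CREATE TABLE src (
  a INT PRIMARY KEY,
  b INT DEFAULT 10 ON UPDATE 20
);

-- The copy now carries over both the DEFAULT and the ON UPDATE expression for b.
CREATE TABLE dst (LIKE src INCLUDING DEFAULTS);
```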

70500: vendor: bump Pebble to 9509dcb7a53a r=sumeerbhola a=jbowens

```
9509dcb compaction: fix nil pointer during errored compactions
d27f1d7 internal/base: add SetWithDelete key kind
971533d base: add `InternalKeyKindSeparator`
3f0c125 cmd/pebble: specify format major version
```

Informs #70443.

Release note: none

70511: authors: add Jon Tsiros to authors r=jtsiros a=jtsiros

Release note: None

70518: authors: add mbookham7 to authors r=mbookham7 a=mbookham7

Release note: None

70525: sql: interleaved tables notice was incorrectly labeled as an error r=fqazi a=fqazi

Previously, interleaved tables were only deprecated; we later dropped
support for them entirely and began returning a message labeled "error"
indicating that such statements are no-ops. This was inaccurate because
the message is not fatal and is only a notice. To address this, this
patch changes the message type to notice.
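
For example (schema is illustrative), such statements now report a notice
rather than an "error"-labeled message:

```
CREATE TABLE parent (a INT PRIMARY KEY);
CREATE TABLE child (a INT PRIMARY KEY) INTERLEAVE IN PARENT parent (a);
-- NOTICE: creation of new interleaved tables or interleaved indexes is no
-- longer supported and will be ignored.
```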

Release note: None

Co-authored-by: rharding6373 <rharding6373@users.noreply.github.com>
Co-authored-by: Cameron Nunez <cameron@cockroachlabs.com>
Co-authored-by: Aditya Maru <adityamaru@gmail.com>
Co-authored-by: Casper <casper@cockroachlabs.com>
Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Co-authored-by: Jackson Owens <jackson@cockroachlabs.com>
Co-authored-by: Jon Tsiros <tsiros@cockroachlabs.com>
Co-authored-by: Mike Bookham <bookham@cockroachlabs.com>
Co-authored-by: Faizan Qazi <faizan@cockroachlabs.com>
10 people committed Sep 21, 2021
10 parents 1f345ec + afc78d5 + 2662eda + 2b4c072 + 7da0dd7 + f04a95c + 4a3fb40 + f9197f9 + 9481fb5 + 12a943b commit af43657
Showing 46 changed files with 1,369 additions and 237 deletions.
2 changes: 2 additions & 0 deletions AUTHORS
@@ -220,6 +220,7 @@ Joseph Botros <jrbotros@gmail.com>
Joseph Lowinske <joe@cockroachlabs.com>
Joseph Nickdow <joseph@cockroachlabs.com>
Joy Tao <joy@poptip.com>
Jon Tsiros <tsiros@cockroachlabs.com>
jqmp <jaqueramaphan@gmail.com>
Juan Leon <juan.leon@gmail.com> <juan@cockroachlabs.com>
Justin Jaffray <justin.jaffray@gmail.com> <justin@cockroachlabs.com>
@@ -275,6 +276,7 @@ mbonaci <mbonaci@gmail.com>
Michael Butler <butler@cockroachlabs.com>
Michael Erickson <michae2@cockroachlabs.com>
Miguel Novelo <miguel.novelo@digitalonus.com> <jmnovelov@gmail.com>
Mike Bookham <bookham@cockroachlabs.com>
mike czabator <michaelc@cockroachlabs.com>
Mike Lewis <mike@cockroachlabs.com>
Mo Firouz <mofirouz@mofirouz.com>
4 changes: 2 additions & 2 deletions DEPS.bzl
@@ -651,8 +651,8 @@ def go_deps():
name = "com_github_cockroachdb_pebble",
build_file_proto_mode = "disable_global",
importpath = "github.com/cockroachdb/pebble",
sum = "h1:NBzearGbADR609fptc7ATDj14I2Gji7ne2mphj0j+r4=",
version = "v0.0.0-20210914174700-0f7e73483566",
sum = "h1:BV8fDvAogQeaAUCgx/8/6J/Sv/enyfkJ10l5bTAcQWI=",
version = "v0.0.0-20210921140715-9509dcb7a53a",
)

go_repository(
1 change: 1 addition & 0 deletions Makefile
@@ -806,6 +806,7 @@ EXECGEN_TARGETS = \
pkg/sql/colexec/rowstovec.eg.go \
pkg/sql/colexec/select_in.eg.go \
pkg/sql/colexec/sort.eg.go \
pkg/sql/colexec/sorttopk.eg.go \
pkg/sql/colexec/sort_partitioner.eg.go \
pkg/sql/colexec/substring.eg.go \
pkg/sql/colexec/values_differ.eg.go \
4 changes: 4 additions & 0 deletions docs/generated/logformats.md
@@ -332,6 +332,7 @@ Each entry contains at least the following fields:
| `line` | The line number where the event was emitted in the source. |
| `redactable` | Whether the payload is redactable (see below for details). |
| `timestamp` | The timestamp at which the event was emitted on the logging channel. |
| `version` | The binary version with which the event was generated. |


After a couple of *header* entries written at the beginning of each log sink,
@@ -389,6 +390,7 @@ Each entry contains at least the following fields:
| `l` | The line number where the event was emitted in the source. |
| `r` | Whether the payload is redactable (see below for details). |
| `t` | The timestamp at which the event was emitted on the logging channel. |
| `v` | The binary version with which the event was generated. |


After a couple of *header* entries written at the beginning of each log sink,
@@ -447,6 +449,7 @@ Each entry contains at least the following fields:
| `line` | The line number where the event was emitted in the source. |
| `redactable` | Whether the payload is redactable (see below for details). |
| `timestamp` | The timestamp at which the event was emitted on the logging channel. |
| `version` | The binary version with which the event was generated. |


After a couple of *header* entries written at the beginning of each log sink,
@@ -505,6 +508,7 @@ Each entry contains at least the following fields:
| `l` | The line number where the event was emitted in the source. |
| `r` | Whether the payload is redactable (see below for details). |
| `t` | The timestamp at which the event was emitted on the logging channel. |
| `v` | The binary version with which the event was generated. |


After a couple of *header* entries written at the beginning of each log sink,
2 changes: 1 addition & 1 deletion go.mod
@@ -44,7 +44,7 @@ require (
github.com/cockroachdb/go-test-teamcity v0.0.0-20191211140407-cff980ad0a55
github.com/cockroachdb/gostdlib v1.13.0
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f
github.com/cockroachdb/pebble v0.0.0-20210914174700-0f7e73483566
github.com/cockroachdb/pebble v0.0.0-20210921140715-9509dcb7a53a
github.com/cockroachdb/redact v1.1.3
github.com/cockroachdb/returncheck v0.0.0-20200612231554-92cdbca611dd
github.com/cockroachdb/sentry-go v0.6.1-cockroachdb.2
4 changes: 2 additions & 2 deletions go.sum
@@ -280,8 +280,8 @@ github.com/cockroachdb/gostdlib v1.13.0 h1:TzSEPYgkKDNei3gbLc0rrHu4iHyBp7/+NxPOF
github.com/cockroachdb/gostdlib v1.13.0/go.mod h1:eXX95p9QDrYwJfJ6AgeN9QnRa/lqqid9LAzWz/l5OgA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f h1:o/kfcElHqOiXqcou5a3rIlMc7oJbMQkeLk0VQJ7zgqY=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
github.com/cockroachdb/pebble v0.0.0-20210914174700-0f7e73483566 h1:NBzearGbADR609fptc7ATDj14I2Gji7ne2mphj0j+r4=
github.com/cockroachdb/pebble v0.0.0-20210914174700-0f7e73483566/go.mod h1:JXfQr3d+XO4bL1pxGwKKo09xylQSdZ/mpZ9b2wfVcPs=
github.com/cockroachdb/pebble v0.0.0-20210921140715-9509dcb7a53a h1:BV8fDvAogQeaAUCgx/8/6J/Sv/enyfkJ10l5bTAcQWI=
github.com/cockroachdb/pebble v0.0.0-20210921140715-9509dcb7a53a/go.mod h1:JXfQr3d+XO4bL1pxGwKKo09xylQSdZ/mpZ9b2wfVcPs=
github.com/cockroachdb/redact v1.0.8/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/redact v1.1.0/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/redact v1.1.3 h1:AKZds10rFSIj7qADf0g46UixK8NNLwWTNdCIGS5wfSQ=
32 changes: 24 additions & 8 deletions pkg/build/info.go
@@ -35,10 +35,11 @@ var (
cgoTargetTriple string
platform = fmt.Sprintf("%s %s", runtime.GOOS, runtime.GOARCH)
// Distribution is changed by the CCL init-time hook in non-APL builds.
Distribution = "OSS"
typ string // Type of this build: <empty>, "development", or "release"
channel = "unknown"
envChannel = envutil.EnvOrDefaultString("COCKROACH_CHANNEL", "unknown")
Distribution = "OSS"
typ string // Type of this build: <empty>, "development", or "release"
channel = "unknown"
envChannel = envutil.EnvOrDefaultString("COCKROACH_CHANNEL", "unknown")
binaryVersion = computeVersion(tag)
)

// IsRelease returns true if the binary was produced by a "release" build.
@@ -52,8 +53,21 @@ func SeemsOfficial() bool {
return channel == "official-binary" || channel == "source-archive"
}

// VersionPrefix returns the version prefix of the current build.
func VersionPrefix() string {
func computeVersion(tag string) string {
v, err := version.Parse(tag)
if err != nil {
return "dev"
}
return v.String()
}

// BinaryVersion returns the version prefix, patch number and metadata of the current build.
func BinaryVersion() string {
return binaryVersion
}

// BinaryVersionPrefix returns the version prefix of the current build.
func BinaryVersionPrefix() string {
v, err := version.Parse(tag)
if err != nil {
return "dev"
@@ -137,11 +151,13 @@ func GetInfo() Info {
// TestingOverrideTag allows tests to override the build tag.
func TestingOverrideTag(t string) func() {
prev := tag
prevVersion := binaryVersion
tag = t
return func() { tag = prev }
binaryVersion = computeVersion(tag)
return func() { tag = prev; binaryVersion = prevVersion }
}

// MakeIssueURL produces a URL to a CockroachDB issue.
func MakeIssueURL(issue int) string {
return fmt.Sprintf("https://go.crdb.dev/issue-v/%d/%s", issue, VersionPrefix())
return fmt.Sprintf("https://go.crdb.dev/issue-v/%d/%s", issue, BinaryVersionPrefix())
}
92 changes: 92 additions & 0 deletions pkg/ccl/backupccl/backup_test.go
@@ -2187,6 +2187,46 @@ func TestRestoreFailDatabaseCleanup(t *testing.T) {
)
}

func TestRestoreFailCleansUpTempSystemDatabase(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)

_, _, sqlDB, dir, cleanup := BackupRestoreTestSetup(t, singleNode, 0, InitManualReplication)
defer cleanup()

// Create a database with a type and table.
sqlDB.Exec(t, `
CREATE DATABASE d;
CREATE TYPE d.ty AS ENUM ('hello');
CREATE TABLE d.tb (x d.ty);
INSERT INTO d.tb VALUES ('hello'), ('hello');
`)

// Cluster BACKUP.
sqlDB.Exec(t, `BACKUP TO $1`, LocalFoo)

// Bugger the backup by removing the SST files.
if err := filepath.Walk(dir+"/foo", func(path string, info os.FileInfo, err error) error {
if err != nil {
t.Fatal(err)
}
if info.Name() == backupManifestName || !strings.HasSuffix(path, ".sst") {
return nil
}
return os.Remove(path)
}); err != nil {
t.Fatal(err)
}

_, _, sqlDBRestore, cleanupRestore := backupRestoreTestSetupEmpty(t, singleNode, dir, InitManualReplication,
base.TestClusterArgs{})
defer cleanupRestore()
// We should get an error when restoring the table.
sqlDBRestore.ExpectErr(t, "sst: no such file", `RESTORE FROM $1`, LocalFoo)
row := sqlDBRestore.QueryStr(t, fmt.Sprintf(`SELECT * FROM [SHOW DATABASES] WHERE database_name = '%s'`, restoreTempSystemDB))
require.Equal(t, 0, len(row))
}

func TestBackupRestoreUserDefinedSchemas(t *testing.T) {
defer leaktest.AfterTest(t)()
defer log.Scope(t).Close(t)
@@ -2867,6 +2907,58 @@ INSERT INTO d.t2 VALUES (ARRAY['hello']);
}
})

// Test cases where we attempt to remap types in the backup to types that
// already exist in the cluster with user defined schema.
t.Run("backup-remap-uds", func(t *testing.T) {
_, _, sqlDB, _, cleanupFn := BackupRestoreTestSetup(t, singleNode, 0, InitManualReplication)
defer cleanupFn()
sqlDB.Exec(t, `
CREATE DATABASE d;
CREATE SCHEMA d.s;
CREATE TYPE d.s.greeting AS ENUM ('hello', 'howdy', 'hi');
CREATE TABLE d.s.t (x d.s.greeting);
INSERT INTO d.s.t VALUES ('hello'), ('howdy');
CREATE TYPE d.s.farewell AS ENUM ('bye', 'cya');
CREATE TABLE d.s.t2 (x d.s.greeting[]);
INSERT INTO d.s.t2 VALUES (ARRAY['hello']);
`)
{
// Backup and restore t.
sqlDB.Exec(t, `BACKUP TABLE d.s.t TO $1`, LocalFoo+"/1")
sqlDB.Exec(t, `DROP TABLE d.s.t`)
sqlDB.Exec(t, `RESTORE TABLE d.s.t FROM $1`, LocalFoo+"/1")

// Check that the table data is restored correctly and the types aren't touched.
sqlDB.CheckQueryResults(t, `SELECT 'hello'::d.s.greeting, ARRAY['hello']::d.s.greeting[]`, [][]string{{"hello", "{hello}"}})
sqlDB.CheckQueryResults(t, `SELECT * FROM d.s.t ORDER BY x`, [][]string{{"hello"}, {"howdy"}})

// d.t should be added as a back reference to greeting.
sqlDB.ExpectErr(t, `pq: cannot drop type "greeting" because other objects \(.*\) still depend on it`, `DROP TYPE d.s.greeting`)
}

{
// Test that backing up and restoring a table with just the array type
// will remap types appropriately.
sqlDB.Exec(t, `BACKUP TABLE d.s.t2 TO $1`, LocalFoo+"/2")
sqlDB.Exec(t, `DROP TABLE d.s.t2`)
sqlDB.Exec(t, `RESTORE TABLE d.s.t2 FROM $1`, LocalFoo+"/2")
sqlDB.CheckQueryResults(t, `SELECT 'hello'::d.s.greeting, ARRAY['hello']::d.s.greeting[]`, [][]string{{"hello", "{hello}"}})
sqlDB.CheckQueryResults(t, `SELECT * FROM d.s.t2 ORDER BY x`, [][]string{{"{hello}"}})
}

{
// Create another database with compatible types.
sqlDB.Exec(t, `CREATE DATABASE d2`)
sqlDB.Exec(t, `CREATE SCHEMA d2.s`)
sqlDB.Exec(t, `CREATE TYPE d2.s.greeting AS ENUM ('hello', 'howdy', 'hi')`)

// Now restore t into this database. It should remap d.greeting to d2.greeting.
sqlDB.Exec(t, `RESTORE TABLE d.s.t FROM $1 WITH into_db = 'd2'`, LocalFoo+"/1")
// d.t should be added as a back reference to greeting.
sqlDB.ExpectErr(t, `pq: cannot drop type "greeting" because other objects \(.*\) still depend on it`, `DROP TYPE d2.s.greeting`)
}
})

t.Run("incremental", func(t *testing.T) {
_, _, sqlDB, _, cleanupFn := BackupRestoreTestSetup(t, singleNode, 0, InitManualReplication)
defer cleanupFn()
32 changes: 26 additions & 6 deletions pkg/ccl/backupccl/restore_job.go
@@ -1503,6 +1503,7 @@ func (r *restoreResumer) Resume(ctx context.Context, execCtx interface{}) error
func (r *restoreResumer) doResume(ctx context.Context, execCtx interface{}) error {
details := r.job.Details().(jobspb.RestoreDetails)
p := execCtx.(sql.JobExecContext)
r.execCfg = p.ExecCfg()

backupManifests, latestBackupManifest, sqlDescs, err := loadBackupSQLDescs(
ctx, p, details, details.Encryption,
Expand Down Expand Up @@ -1565,7 +1566,6 @@ func (r *restoreResumer) doResume(ctx context.Context, execCtx interface{}) erro
return err
}
}
r.execCfg = p.ExecCfg()
var remappedStats []*stats.TableStatisticProto
backupStats, err := getStatisticsFromBackup(ctx, defaultStore, details.Encryption,
latestBackupManifest)
@@ -1717,7 +1717,7 @@ func (r *restoreResumer) doResume(ctx context.Context, execCtx interface{}) erro
// Reload the details as we may have updated the job.
details = r.job.Details().(jobspb.RestoreDetails)

if err := r.cleanupTempSystemTables(ctx); err != nil {
if err := r.cleanupTempSystemTables(ctx, nil /* txn */); err != nil {
return err
}
}
@@ -2085,7 +2085,17 @@ func (r *restoreResumer) OnFailOrCancel(ctx context.Context, execCtx interface{}
return err
}
}
return r.dropDescriptors(ctx, execCfg.JobRegistry, execCfg.Codec, txn, descsCol)

if err := r.dropDescriptors(ctx, execCfg.JobRegistry, execCfg.Codec, txn, descsCol); err != nil {
return err
}

if details.DescriptorCoverage == tree.AllDescriptors {
// The temporary system table descriptors should already have been dropped
// in `dropDescriptors` but we still need to drop the temporary system db.
return r.cleanupTempSystemTables(ctx, txn)
}
return nil
}); err != nil {
return err
}
@@ -2535,16 +2545,26 @@ func (r *restoreResumer) restoreSystemTables(
return nil
}

func (r *restoreResumer) cleanupTempSystemTables(ctx context.Context) error {
func (r *restoreResumer) cleanupTempSystemTables(ctx context.Context, txn *kv.Txn) error {
executor := r.execCfg.InternalExecutor
// Check if the temp system database has already been dropped. This can happen
// if the restore job fails after the system database has cleaned up.
checkIfDatabaseExists := "SELECT database_name FROM [SHOW DATABASES] WHERE database_name=$1"
if row, err := executor.QueryRow(ctx, "checking-for-temp-system-db" /* opName */, txn, checkIfDatabaseExists, restoreTempSystemDB); err != nil {
return errors.Wrap(err, "checking for temporary system db")
} else if row == nil {
// Temporary system DB might already have been dropped by the restore job.
return nil
}

// After restoring the system tables, drop the temporary database holding the
// system tables.
gcTTLQuery := fmt.Sprintf("ALTER DATABASE %s CONFIGURE ZONE USING gc.ttlseconds=1", restoreTempSystemDB)
if _, err := executor.Exec(ctx, "altering-gc-ttl-temp-system" /* opName */, nil /* txn */, gcTTLQuery); err != nil {
if _, err := executor.Exec(ctx, "altering-gc-ttl-temp-system" /* opName */, txn, gcTTLQuery); err != nil {
log.Errorf(ctx, "failed to update the GC TTL of %q: %+v", restoreTempSystemDB, err)
}
dropTableQuery := fmt.Sprintf("DROP DATABASE %s CASCADE", restoreTempSystemDB)
if _, err := executor.Exec(ctx, "drop-temp-system-db" /* opName */, nil /* txn */, dropTableQuery); err != nil {
if _, err := executor.Exec(ctx, "drop-temp-system-db" /* opName */, txn, dropTableQuery); err != nil {
return errors.Wrap(err, "dropping temporary system db")
}
return nil
12 changes: 10 additions & 2 deletions pkg/ccl/backupccl/restore_planning.go
@@ -718,12 +718,20 @@ func allocateDescriptorRewrites(
}

// See if there is an existing type with the same name.
getParentSchemaID := func(typ *typedesc.Mutable) (parentSchemaID descpb.ID) {
parentSchemaID = typ.GetParentSchemaID()
// If we find UDS with same name defined in the restoring DB, use its ID instead.
if rewrite, ok := descriptorRewrites[parentSchemaID]; ok && rewrite.ID != 0 {
parentSchemaID = rewrite.ID
}
return
}
desc, err := catalogkv.GetDescriptorCollidingWithObject(
ctx,
txn,
p.ExecCfg().Codec,
parentID,
typ.GetParentSchemaID(),
getParentSchemaID(typ),
typ.Name,
)
if err != nil {
@@ -744,7 +752,7 @@
// Ensure that there isn't a collision with the array type name.
arrTyp := typesByID[typ.ArrayTypeID]
typeName := tree.NewUnqualifiedTypeName(arrTyp.GetName())
err := catalogkv.CheckObjectCollision(ctx, txn, p.ExecCfg().Codec, parentID, typ.GetParentSchemaID(), typeName)
err = catalogkv.CheckObjectCollision(ctx, txn, p.ExecCfg().Codec, parentID, getParentSchemaID(typ), typeName)
if err != nil {
return errors.Wrapf(err, "name collision for %q's array type", typ.Name)
}
2 changes: 1 addition & 1 deletion pkg/ccl/cliccl/demo.go
@@ -41,7 +41,7 @@ func getLicense(clusterID uuid.UUID) (string, error) {
q := req.URL.Query()
// Let the endpoint know we are requesting a demo license.
q.Add("kind", "demo")
q.Add("version", build.VersionPrefix())
q.Add("version", build.BinaryVersionPrefix())
q.Add("clusterid", clusterID.String())
req.URL.RawQuery = q.Encode()

4 changes: 2 additions & 2 deletions pkg/ccl/logictestccl/testdata/logic_test/partitioning
@@ -97,10 +97,10 @@ CREATE TABLE interleave_root (a INT PRIMARY KEY) PARTITION BY LIST (a) (
PARTITION p0 VALUES IN (0)
)

statement notice ERROR: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
statement notice NOTICE: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
CREATE TABLE interleave_child (a INT PRIMARY KEY) INTERLEAVE IN PARENT interleave_root (a)

statement notice ERROR: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
statement notice NOTICE: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
CREATE TABLE t (a INT PRIMARY KEY) INTERLEAVE IN PARENT interleave_root (a) PARTITION BY LIST (a) (
PARTITION p0 VALUES IN (0)
)
4 changes: 2 additions & 2 deletions pkg/ccl/logictestccl/testdata/logic_test/regional_by_row
@@ -117,7 +117,7 @@ LOCALITY REGIONAL BY ROW
statement ok
CREATE TABLE parent_table (pk INT PRIMARY KEY)

statement notice ERROR: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
statement notice NOTICE: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
CREATE TABLE regional_by_row_table_int (
pk INT NOT NULL PRIMARY KEY
)
@@ -348,7 +348,7 @@ CREATE INDEX bad_idx ON regional_by_row_table(a) USING HASH WITH BUCKET_COUNT =
statement error hash sharded indexes are not compatible with REGIONAL BY ROW tables
ALTER TABLE regional_by_row_table ALTER PRIMARY KEY USING COLUMNS(pk2) USING HASH WITH BUCKET_COUNT = 8

statement notice ERROR: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
statement notice NOTICE: creation of new interleaved tables or interleaved indexes is no longer supported and will be ignored. For details, see https://www.cockroachlabs.com/docs/releases/v20.2.0#deprecations
CREATE INDEX bad_idx_int ON regional_by_row_table(pk) INTERLEAVE IN PARENT parent_table(pk)

statement ok