sql: Internal Error when creating type in SHOW JOBS #97362
Hello, I am Blathers. I am here to help you get the issue triaged. Hoot - a bug! Though bugs are the bane of my existence, rest assured the wretched thing will get the best of care here. I have CC'd a few people who may be able to assist you:
If we have not gotten back to your issue within a few business days, you can try the following:
🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
cc @cockroachdb/cdc
This adds an early error for WITH clauses with no columns, so we don't try to build a statement with more annotations than the main statement, which can cause an internal error.

Fixes: cockroachdb#97362

Release note (bug fix): Rare internal errors in SHOW JOBS statements which have a WITH clause are fixed.
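For illustration, here is a minimal sketch of the statement shape this guards against: a CTE that yields no columns attached to a SHOW JOBS statement. The original PoC query is not reproduced in this thread, and whether a given version accepts this exact syntax may vary.

```sql
-- Hypothetical repro shape only, not the original PoC: a WITH clause
-- whose CTE produces zero columns, wrapped around SHOW JOBS. Before
-- the fix, building the delegated statement for such a query could
-- produce an annotation-count mismatch and surface as an internal
-- error instead of a normal user-facing error.
WITH w AS (SELECT FROM system.jobs) SHOW JOBS;
```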
This updates the delegator, which parses textual SQL statements representing specific DDL statements, so that the `Annotations` slice allocated in `planner.semaCtx` matches the actual number of annotations built during parsing (if the delegator successfully built a statement).

Fixes: cockroachdb#97362

Release note (bug fix): Rare internal errors in SHOW JOBS statements which have a WITH clause are fixed.
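For context, the delegator rewrites statements such as SHOW JOBS into ordinary SQL over internal tables before planning. A simplified sketch of the kind of query it produces is shown below; this is not the exact generated statement, which is more involved.

```sql
-- Simplified illustration of delegation, not the exact rewrite:
-- SHOW JOBS is internally expanded into a SELECT against an internal
-- jobs table. The annotations attached while building this expanded
-- statement are what must match the slice sized in planner.semaCtx.
SELECT job_id, job_type, description, status, created
FROM crdb_internal.jobs
ORDER BY created DESC;
```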
98259: roachtest: add an option to create nodes with low memory per core r=lidorcarmel a=lidorcarmel

Currently in roachtest we can create VMs with a default amount of RAM per core, or we can request highmem machines. This patch adds an option to ask for highcpu (lower RAM per core) machines. Previously, on GCE:
- Creating a VM with <= 16 cores used standard memory per core (~4GB).
- Creating a VM with more cores used 'highcpu' nodes (~1GB per core).
- Using the `HighMem` option created a VM with high memory per core (~6.5GB).
- There was no option to create, for example, a 32-core VM with standard memory (~4GB) - only low mem or high mem.

With this patch the test writer can pick any low/standard/high memory-per-core ratio. The initial need is to run restore performance benchmarks with reduced memory to verify that nodes don't OOM in that environment. Running with n1-highcpu-8 is not ideal, but nodes should not OOM. With this change we can also run with 32 cores and 'standard' RAM (n1-standard-32).

Epic: none
Release note: None

98389: delegate: make actual number of annotations used by delegator r=msirek a=msirek

This updates the delegator, which parses textual SQL statements representing specific DDL statements, so that the `Annotations` slice allocated in `planner.semaCtx` matches the actual number of annotations built during parsing (if the delegator successfully built a statement).

Fixes: #97362

Release note (bug fix): Rare internal errors in SHOW JOBS statements which have a WITH clause are fixed.

98402: roachtest: make 'testImpl.Skip/f' behave consistently with 'TestSpec.Skip' r=smg260 a=smg260

This will show tests skipped via t.Skip() as ignored in the TC UI, with the caveat that it will show the test as having run twice. See inline comment for details.

Resolves: #96351
Release note: None
Epic: None

98648: roachtest: scale kv concurrency inversely with batch size r=kvoli a=nvanbenschoten

Informs #96800. This commit updates the kv roachtest suite to scale the workload concurrency inversely with batch size. This ensures that we account for the cost of each operation, recognizing that the system can handle fewer concurrent operations as the cost of each operation grows. We currently have three kv benchmark variants that set a non-default batch size:
- `kv0/enc=false/nodes=3/batch=16`
- `kv50/enc=false/nodes=4/cpu=96/batch=64`
- `kv95/enc=false/nodes=3/batch=16`

Without this change, these tests badly overload their clusters. This leads to p50 latencies in the 300-400ms range, about 10x greater than the corresponding p50 in a non-overloaded cluster. In this degraded regime, performance is unstable and non-representative. Customers don't run clusters at this level of overload, and maximizing throughput once we hit the latency cliff is no longer a goal. By reducing the workload concurrency to avoid overload, we reduce throughput by about 10% and reduce p50 and p99 latency by about 90%.

Release note: None

98806: changefeedccl: add WITH key_column option r=[jayshrivastava] a=HonoreDB

Changefeeds running on an outbox table see a synthetic primary key that isn't useful for downstream partitioning. This PR adds an encoder option to use a different column as the key, not in internal logic, but only in message metadata. This breaks end-to-end ordering because we're only ordered with respect to the actual primary key, and the sink will only order with respect to the key we emit. We therefore require the unordered flag here.

Closes #54461.

Release note (enterprise change): Added the WITH key_column option to override the key used in message metadata. This changes the key hashed to determine Kafka partitions. It does not affect the output of key_in_value or the domain of the per-key ordering guarantee.

98824: ci: create bazel-fips docker image r=healthy-pod a=rail

Previously, we used the cockroachdb/bazel image to build and run our tests. In order to run FIPS tests, the image has to use particular FIPS-enabled packages. This PR adds a new bazelbuilder image that uses FIPS-compliant packages.
* Added `build/.bazelbuilderversion-fips` file.
* Added `run_bazel_fips` wrapper.
* The image builder script uses `dpkg-repack` to reproduce the same packages.

Epic: DEVINF-478
Release note: None

98856: jobspb: mark the key visualizer job as automatic r=zachlite a=zachlite

Epic: None
Release note: None

Co-authored-by: Lidor Carmel <lidor@cockroachlabs.com>
Co-authored-by: Mark Sirek <sirek@cockroachlabs.com>
Co-authored-by: Miral Gadani <miral@cockroachlabs.com>
Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Co-authored-by: Aaron Zinger <zinger@cockroachlabs.com>
Co-authored-by: Rail Aliiev <rail@iqchoice.com>
Co-authored-by: zachlite <zachlite@gmail.com>
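As a usage sketch of the key_column option described in #98806: the table name, sink URI, and column name below are hypothetical, chosen only to illustrate the option.

```sql
-- Hypothetical usage of WITH key_column: override the key used in
-- message metadata (for example, the key hashed to pick a Kafka
-- partition). The unordered option is required because per-key
-- ordering is only guaranteed with respect to the actual primary key.
CREATE CHANGEFEED FOR TABLE outbox
INTO 'kafka://localhost:9092'
WITH key_column = 'aggregate_id', unordered;
```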
Describe the problem
The latest alpha version of CockroachDB (v23.1.0-alpha.2) encounters an Internal Error when executing the following query.

To Reproduce

Run ./cockroach demo, and then paste the PoC query into the cockroach CLI environment. The server will report an Internal Error and log the stack information.

Additional data / screenshots
Here is the stack trace from v23.1.0-alpha.2:

Environment:

CockroachDB v23.1.0-alpha.2 (./cockroach demo)

Additional context
The bug can also be reproduced in v23.1.0-alpha.2, v23.1.0-alpha.1, v22.1.13, v22.2.3, and v21.2.17. We haven't tested versions earlier than v21.2.17.

Jira issue: CRDB-24655