
cli/dump: dump produces inserts that are too large to be restored #31676

Closed
isoos opened this issue Oct 21, 2018 · 9 comments
Labels
A-disaster-recovery C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. O-community Originated from the community T-disaster-recovery

Comments

@isoos

isoos commented Oct 21, 2018

I've created a database dump using cockroach dump [database] >output.sql. However, I'm not able to restore it with either native psql or the cockroach sql command, most likely because the generated inserts are too large.

One of the tables has rows with binary data, each about 1 MB in size, and the dump seems to create insert statements in batches of 100 rows regardless of size. I think the inserts fail because the aggregated size exceeds the 64 MB limit.

@knz
Contributor

knz commented Oct 22, 2018

Oh that is nicely spotted.

As a workaround, you can modify insertRows in pkg/cli/dump.go and rebuild to get a custom cockroach dump with a different (e.g. lower) number of rows per insert.

The proper fix on our side will probably be to auto-detect a good batch size.
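Such an auto-detected batch size would mean flushing a batch on whichever cap is hit first: row count or accumulated bytes. The sketch below is illustrative only, not the actual pkg/cli/dump.go code; the Row type, batchRows function, and the caps are assumptions for the example.

```go
package main

import "fmt"

// Row stands in for one encoded INSERT tuple from the dump.
type Row struct {
	Data string
}

// batchRows groups rows so each batch stays under both a row-count cap
// and a byte-size cap, flushing whenever adding the next row would
// exceed either limit.
func batchRows(rows []Row, maxRows, maxBytes int) [][]Row {
	var batches [][]Row
	var cur []Row
	curBytes := 0
	for _, r := range rows {
		sz := len(r.Data)
		if len(cur) > 0 && (len(cur) >= maxRows || curBytes+sz > maxBytes) {
			batches = append(batches, cur)
			cur, curBytes = nil, 0
		}
		cur = append(cur, r)
		curBytes += sz
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	// Five 3-byte rows, row cap 100, byte cap 7: the byte cap, not the
	// row cap, forces the flushes, mirroring the 1 MB-rows-vs-64 MB case.
	rows := []Row{{"aaa"}, {"bbb"}, {"ccc"}, {"ddd"}, {"eee"}}
	for i, b := range batchRows(rows, 100, 7) {
		fmt.Printf("batch %d: %d rows\n", i, len(b))
	}
}
```

With a fixed 100-row batch, 100 rows of ~1 MB each yields a ~100 MB statement, well over a 64 MB limit; a byte cap like the one above keeps each statement restorable.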

@knz knz added C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. O-community Originated from the community A-disaster-recovery A-cli labels Oct 22, 2018
@knz knz changed the title Unable to restore dump cli/dump: dump produces inserts that are too large to be restored Oct 22, 2018
@knz
Contributor

knz commented Oct 22, 2018

cc @rolandcrosby @mjibson for triage and prioritization


@tbg
Member

tbg commented Oct 22, 2018

@mjibson, could you help István out?

@knz
Contributor

knz commented Oct 22, 2018 via email

@knz
Contributor

knz commented Nov 16, 2018

This is an extension of #28948.

@zsalab

zsalab commented Nov 5, 2020

Any news on this? Is there a workaround? Can I help somehow? (I reported #51969.)

@knz
Contributor

knz commented Nov 10, 2020

My comment from #31676 (comment) is still applicable.

@dt
Member

dt commented Mar 9, 2021

20.2 made proper binary BACKUP+RESTORE available for non-enterprise users and deprecated the text-based dump, so we're not planning to go back and extend the deprecated dump functionality to add variable-sized insert batches. Closing for now as won't-fix.
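For readers landing here with the same problem, the replacement workflow looks roughly like this (a sketch assuming a v20.2-era cluster, an insecure connection, a hypothetical database name mydb, and a nodelocal destination; check the current CockroachDB docs for the exact syntax of your version):

```
# Back up a database to the node's local store:
cockroach sql --insecure -e "BACKUP DATABASE mydb TO 'nodelocal://1/mydb-backup';"

# Restore it on the target cluster:
cockroach sql --insecure -e "RESTORE DATABASE mydb FROM 'nodelocal://1/mydb-backup';"
```

Because BACKUP stores data in a binary format and RESTORE ingests it directly, the oversized-INSERT problem from text dumps does not arise.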

@dt dt closed this as completed Mar 9, 2021
6 participants