cli: deprecate cockroach dump
#54040
Comments
Fixes: cockroachdb#54040. Release justification: low-risk, high-benefit changes to existing functionality.
I currently use
Hi @CyborgMaster, thanks for reaching out! We're aware of this use case and are in the process of outlining an alternative solution in time for the next release. You can track progress at #53488, and if you have any specific requirements about what you'd like to see in that new output dump, I'd encourage you to comment on the issue 🙂
Previously we maintained both BACKUP and dump because BACKUP was enterprise-only, but in 20.2+ basic backup, which can do at least as much as dump can, is free, so this is no longer a reason to keep dump. More detailed explanation at cockroachdb#54040. Fixes: cockroachdb#56405
56964: cli: remove cockroach dump from CRDB r=knz a=adityamaru Previously we maintained both BACKUP and dump because BACKUP was enterprise-only, but in 20.2+ basic backup, which can do at least as much as dump can, is free, so this is no longer a reason to keep dump. More detailed explanation at #54040. Fixes: #56405 Fixes: #28948 Co-authored-by: Aditya Maru <adityamaru@gmail.com>
What is currently the recommended method of backing up a whole database to a client machine (the device I'm running the command from)?
If the above is correct, it seems the only way to export all data to a local device is to write SQL statements that iterate over all existing tables and back up the data table by table with EXPORT. Is there a more streamlined method, or am I overlooking something? Thanks
On a second look,
Hi @tomholub! Indeed, EXPORT writes the results of running a given query directly from the nodes running the query to cloud storage, and to do so with maximum throughput it intentionally avoids sending the results back through a single client in the middle. If you want to write the output of running a given query to a local file where you're running your client, you can use the client's

To your original question: BACKUP is indeed only able to back up to a "storage location", which is typically cloud storage, but there are a couple of options if you just want to run a backup and then download the files it wrote locally, without a cloud storage bucket in the middle. One option is to use the file system on one of your nodes, e.g.

Another option, if you're working with small data sizes and don't have direct access to the node file system (e.g. via SCP), was introduced in 21.1: the ability to store "userfiles", including backup files, as bytes in regular SQL tables in the cluster itself, which are then read/written by the client over a SQL connection. This means you can

Do either of those, or the piped CSV format, help?
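A minimal sketch of the two approaches described in that comment. The database name, node host, and paths below are hypothetical placeholders, not values from this thread:

```shell
# Option 1: back up to a node's local file system, then copy the files down.
# 'nodelocal://1/mybackup' writes under node 1's --external-io-dir.
cockroach sql --url "$CRDB_URL" \
  -e "BACKUP DATABASE mydb TO 'nodelocal://1/mybackup';"
# Copy the written files to the client machine (path on the node is a guess):
scp -r user@node1:/var/lib/cockroach/extern/mybackup ./mybackup

# Option 2 (21.1+): store the backup as "userfile" bytes inside the cluster,
# then fetch it over the SQL connection with `cockroach userfile get`.
cockroach sql --url "$CRDB_URL" \
  -e "BACKUP DATABASE mydb TO 'userfile:///mybackup';"
cockroach userfile get 'userfile:///mybackup' ./mybackup --url "$CRDB_URL"
```

Both options end with the backup files on the client machine without any cloud storage bucket in the middle; note both require a running cluster to try.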
Thank you for the response - between these, I should be able to choose an option that works for us.
"cockroach dump", I think, is very convenient for dumping schema and records, and the output is human-readable.
As a new user, how can I back up my data into plain SQL? The backup and restore process should be easy. I want to make sure that my data is safe and that I can validate it. On an import, CockroachDB processed
Yeah, so I gave up trying to restore a database dump from the file system. I couldn't figure out how to upload a database backup to

So the next best solution for me was to use a custom S3-like hosted service. I stumbled across this post and used MinIO. Launch a Docker container (or download an executable from the website):

Launch the MinIO console at http://localhost:29191/, log in with
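The launch command was stripped from the comment above; a hedged reconstruction of a typical MinIO setup follows. Only the console port 29191 comes from the thread - the API port, credentials, and bucket/database names are all hypothetical:

```shell
# Run MinIO locally: 29190 serves the S3 API, 29191 the web console
# (29191 matches the console URL mentioned above; the rest is assumed).
docker run -d --name minio \
  -p 29190:9000 -p 29191:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"

# Point CockroachDB at the S3-compatible endpoint for BACKUP/RESTORE:
cockroach sql -e "BACKUP DATABASE mydb TO \
  's3://mybucket/backup?AWS_ACCESS_KEY_ID=minioadmin&AWS_SECRET_ACCESS_KEY=minioadmin&AWS_ENDPOINT=http://localhost:29190'"
```

The `AWS_ENDPOINT` query parameter is what redirects CockroachDB's S3 client from Amazon to the local MinIO instance.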
Hey! I can help you figure out how to backup to
Hey @adityamaru, can you please help me upload the folder

Also, can you please help me with a PLAIN SQL backup? I want to make a database backup to a file, like
What version of cockroach are you running?

[root@roach1 cockroach]# ./cockroach version
There are two options when it comes to restoring from a "local" storage option, namely

Since you do not have too many files in your backup, I'm going to recommend uploading each file individually while maintaining the directory structure. This would look like:

And then for each SST file:

Note, I have used

Once you have done this you should be able to run the RESTORE query with

You should be able to script the above upload statements. One of our Solutions Engineers has put together a blog post that might be useful: https://blog.ervits.com/2021/07/recover-from-disaster-using-userfile.html. We realize this UX is not great at the moment, but with recursive upload coming soon this should become much more pleasant!
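The per-file upload described above can be scripted; a sketch assuming a local backup directory `./mybackup` and a hypothetical userfile destination (both placeholders, not from the thread):

```shell
# Upload every file in the backup directory, preserving relative paths so
# the userfile layout mirrors the original backup's directory structure.
cd ./mybackup
find . -type f | while read -r f; do
  rel="${f#./}"   # strip leading "./" so the destination path stays clean
  cockroach userfile upload "$rel" "userfile:///mybackup/$rel" --url "$CRDB_URL"
done

# Then restore from the uploaded location:
cockroach sql --url "$CRDB_URL" \
  -e "RESTORE DATABASE mydb FROM 'userfile:///mybackup';"
```

This is the manual equivalent of the recursive upload mentioned as "coming soon"; once that lands, the loop collapses to a single upload command.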
We no longer support generating a single PGDUMP-like file (DDL + DML) and recommend users rely on BACKUP/RESTORE, as this is a battle-tested and well-maintained feature of CockroachDB. There are, however, a couple of other tools that might be of interest to you:
Does either BACKUP/RESTORE, or the piped CSV solution outlined above, help?
@adityamaru
The thing is: this strips away the possibility of encrypted backups, e.g. via Restic. Please understand me correctly: getting paid for convenience features is totally fine, but completely taking away any encryption capability is not.
This functionality is duplicative with BACKUP and RESTORE, though those use more reliable, native representations for data and metadata. Ensuring that all new SQL features, like types, schemas, etc., work correctly in dump adds significant overhead to that feature work, or in the past was often missed, leaving dump incomplete or broken. Previously we maintained both BACKUP and dump because BACKUP was enterprise-only, but in 20.2+ basic backup, which can do at least as much as dump can, is free, so this is no longer a reason to keep dump.

Interoperability with other databases is also not a reason to keep dump around -- dump already does not produce data that can be directly loaded into another database, so a more reliable migration would be to export to a vendor-neutral format like CSV and use the native loading facilities of the destination DB. If maintaining dump to keep up with just our own feature set has proven too expensive, keeping up with it against a foreign DB would be doubly so.
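The CSV-based migration path described above can be sketched as follows. Table names, the destination path, and the PostgreSQL target are hypothetical examples:

```shell
# Export a table to CSV on a node's local disk (any cloud-storage URL works too):
cockroach sql -e "EXPORT INTO CSV 'nodelocal://1/export/users' FROM TABLE mydb.users;"

# Or stream query results as CSV straight to the client machine:
cockroach sql --format=csv -e "SELECT * FROM mydb.users;" > users.csv

# The CSV can then be loaded with the destination database's native tooling,
# e.g. PostgreSQL's \copy:
psql -c "\copy users FROM 'users.csv' WITH (FORMAT csv, HEADER true)"
```

Unlike dump output, the CSV carries no CockroachDB-specific DDL, so the destination schema must be created with that database's own syntax first.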