Missing partitions on uploaded backup to S3 #203
Could you share the results of … ?
I am not sure why it thinks the backup is broken; debug_backup/metadata.json exists in the S3 bucket. Again, I downloaded default_1.tar and extracted it. It only has parts from 199802 until 202007, but the local backup has much more:
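For anyone reproducing this, here is a minimal sketch of how the two part lists can be compared. It assumes the downloaded archive has part directories at its top level and that the local backup lives under the default /var/lib/clickhouse/backup path; both are assumptions, so adjust to your layout:

```bash
# Part names inside the downloaded archive (top-level entries assumed to be parts)
tar -tf default_1.tar | cut -d/ -f1 | sort -u > remote_parts.txt

# Part names in the local backup (path is an assumption; adjust to your layout)
ls /var/lib/clickhouse/backup/debug_backup/shadow | sort -u > local_parts.txt

# Anything present locally but missing from the upload
comm -23 local_parts.txt remote_parts.txt
```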
Sorry, closed by accident.
Thank you for the detailed description. It is very useful.
I have improved error handling on upload in aadce0b. Please try again with the version from master.
@AlexAkulov Good work! It turned out to be helpful:
Edit: Here are the current shell limits. This is CentOS 8:
I guess I can increase it with ulimit -n prior to running clickhouse-backup.
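For reference, a sketch of checking and raising the limit for the current shell session before the upload; 65536 is just an example value, and the backup name is the one from this thread:

```bash
# Current soft limit on open files for this shell
ulimit -n

# Raise it for this shell session only (example value),
# then run the upload from the same session
ulimit -n 65536
clickhouse-backup upload debug_backup
```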
This is a bug in clickhouse-backup. I tried to fix it in 65af2e0.
Hi, thank you. As of commit 756ceac, everything is working perfectly for me.
Thank you very much for the debugging!
Hi,
I use v1.0.0-alpha2 because v0.6.5 doesn't work with ClickHouse 21.3 for me; it gives an error about there being no tables to back up.
If I create a backup with "clickhouse-backup create", the local backup is good; all the partitions are there.
But once I run "clickhouse-backup upload" to upload this backup to S3, the uploaded backup is missing many partitions, roughly a full year's worth. I verified this by downloading default_1.tar.zstd and extracting it.
The local backup, however, is fine.
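For completeness, this is roughly how the zstd archive can be inspected, assuming GNU tar and the zstd CLI are available:

```bash
# List archive contents without unpacking (GNU tar with the zstd helper)
tar -I zstd -tf default_1.tar.zstd | cut -d/ -f1 | sort -u

# Or decompress first and extract into a scratch directory
zstd -d default_1.tar.zstd            # produces default_1.tar
mkdir -p extracted && tar -xf default_1.tar -C extracted
```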
Also, when uploading, the output is a little strange; it finishes too soon: