cmd to run

```
clickhouse-backup -c /etc/clickhouse-backup/config-test.yml watch --tables=dwh_meta.*
```

VERSION: 2.1.2

last log before quit
```
2022/11/24 16:53:26.324808  info clickhouse connection open: tcp://localhost:9005 logger=clickhouse
2022/11/24 16:53:26.324832  info SELECT * FROM system.disks; logger=clickhouse
[clickhouse]host(s)=localhost:9005, database=system, username=default
[clickhouse][dial] secure=false, skip_verify=false, strategy=random, ident=45, server=0 -> 127.0.0.1:9005
[clickhouse][connect=45][hello] -> Golang SQLDriver 1.1.54213
[clickhouse][connect=45][hello] <- ClickHouse 22.10.54460 (Europe/Moscow)
[clickhouse][connect=45][prepare] SELECT * FROM system.disks;
[clickhouse][connect=45][send query] SELECT * FROM system.disks;
[clickhouse][connect=45][query settings] connect_timeout=900&receive_timeout=900&send_timeout=900
[clickhouse][connect=45][send external tables] count 0
[clickhouse][connect=45][read meta] <- data: packet=1, columns=9, rows=0
[clickhouse][connect=45][rows] <- data: packet=1, columns=9, rows=1, elapsed=18.873µs
[clickhouse][connect=45][rows] <- profiling: rows=1, bytes=37760, blocks=1
[clickhouse][connect=45][rows] <- progress: rows=1, bytes=93, total rows=0
[clickhouse][connect=45][rows] <- data: packet=1, columns=0, rows=0, elapsed=750ns
[clickhouse][connect=45][rows] <- progress: rows=0, bytes=0, total rows=0
[clickhouse][connect=45][rows] <- end of stream
[clickhouse][connect=45][rows] close
[clickhouse][connect=45][stmt] close
2022/11/24 16:53:26.327705 debug remove '/clickhouse/backup/test-full-20221124135314' logger=RemoveBackupLocal
2022/11/24 16:53:26.329460  info done backup=test-full-20221124135314 duration=7ms location=local logger=RemoveBackupLocal operation=delete
2022/11/24 16:53:26.329487  info clickhouse connection closed logger=clickhouse
2022/11/24 16:55:14.309496  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' logger=clickhouse
2022/11/24 16:55:14.309553  info clickhouse connection closed logger=clickhouse
2022/11/24 16:55:14.309581 error sql: database is closed
```
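The tail of the log shows the clickhouse connection being closed right after the old local backup is removed, and roughly one `watch_interval` later the next query fails. The error text itself comes from Go's `database/sql`: any call on a `*sql.DB` that has already been `Close()`d returns `sql: database is closed` instead of reconnecting. A minimal standalone sketch that reproduces the same message (hypothetical program, assuming a local ClickHouse on port 9005 and the clickhouse-go v1 driver that the `Golang SQLDriver 1.1.54213` line suggests):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/ClickHouse/clickhouse-go" // v1 driver, matching "Golang SQLDriver 1.1.54213" in the log above
)

func main() {
	// Same connection parameters as in the config below (localhost:9005, default user).
	db, err := sql.Open("clickhouse", "tcp://localhost:9005?username=default&password=pass")
	if err != nil {
		log.Fatal(err)
	}

	// First cycle: queries work while the pool is open.
	rows, err := db.Query("SELECT * FROM system.disks")
	if err != nil {
		log.Println("first cycle:", err)
	} else {
		rows.Close()
	}

	// Cleanup at the end of the cycle closes the *sql.DB ...
	db.Close()

	// ... and the next cycle reuses the same closed handle, so database/sql
	// fails immediately with "sql: database is closed".
	var isMacrosExists uint64
	err = db.QueryRow("SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'").Scan(&isMacrosExists)
	if err != nil {
		log.Println("second cycle:", err) // sql: database is closed
	}
}
```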
my config
```yaml
general:
  remote_storage: sftp
  disable_progress_bar: True
  backups_to_keep_local: 7
  backups_to_keep_remote: 7
  watch_backup_name_template: test-{type}-{time:20060102150405}
  log_level: debug
  watch_interval: 2m
  full_interval: 14m
clickhouse:
  username: default
  password: pass
  host: localhost
  port: 9005
  data_path: .
  skip_tables: ['system.*']
  timeout: 15m
  freeze_by_part: False
  debug: true
sftp:
  address: backuper.local
  username: root
  password: .
  key: /root/.ssh/id_rsa
  path: /backup_dwh/clickhouse01-backup
  concurrency: 5
  compression_format: tar
  compression_level: 1
```
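With `watch_interval: 2m` a new backup cycle starts every two minutes, which matches the roughly two-minute gap between the successful cycle and the failing query in the log above. A hedged sketch (made-up helper name, not clickhouse-backup's actual code) of the kind of guard that would avoid issuing queries on a handle the previous cycle already closed:

```go
package backup

import (
	"database/sql"

	_ "github.com/ClickHouse/clickhouse-go"
)

// ensureOpen is a hypothetical helper, not part of clickhouse-backup's API:
// it pings the cached handle and reopens it when database/sql reports the
// pool as closed, so a new watch cycle never hits "sql: database is closed".
func ensureOpen(db *sql.DB, dsn string) (*sql.DB, error) {
	if db != nil && db.Ping() == nil {
		return db, nil
	}
	return sql.Open("clickhouse", dsn)
}
```

Whether the real fix reopens the connection per cycle or keeps it open until the watch loop exits is up to the maintainers; the sketch only illustrates the failure mode.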
Slach: Thank you so much for reporting, will try to fix ASAP