ATTACH MATERIALIZED VIEW + ON CLUSTER doesn't work as expected #388

Closed
shattl2000 opened this issue Feb 10, 2022 · 28 comments · Fixed by #395

shattl2000 commented Feb 10, 2022

Hi,
I have a problem restoring data from a backup: only the databases are restored, without their tables or data.
I use this command to restore:
clickhouse-backup restore 2022-02-10T07-24-02 --rm

When I try to restore a single table from one of the databases, I get this error:
clickhouse-backup restore 2022-02-10T07-24-02 --rm --table=bd2.bd2_sharded
2022/02/10 12:07:00.655248 error can't attach partitions for table 'bd2.bd2_sharded': code: 1000, message: Access to file denied: insufficient permissions: /var/lib/clickhouse/store/77a/77ae17d9-5adb-4c9c-a2de-2b1184e39c7d/detached/attaching_45ada9f87e7a3926379f51376de3c0e6_0_37_7

This issue occurs with clickhouse-backup version 1.3.0.
With version 1.2.2 the problem doesn't exist; however, version 1.2.2 does not create Distributed tables whose names match the name of the database.

Slach (Collaborator) commented Feb 10, 2022

How did you download your backup to the ClickHouse server? Which type of remote_storage do you use in your config?

Slach (Collaborator) commented Feb 10, 2022

Could you share the output of the following commands?

id
ls -la /var/lib/clickhouse/backup/2022-02-10T07-24-02/shadow

shattl2000 (Author) commented Feb 10, 2022

I downloaded the backup with the command
clickhouse-backup download 2022-02-10T07-24-02
All backups are stored on S3. I made the backup and uploaded it to S3 with the command
clickhouse-backup create_remote

Remote storage is set to s3 in the config:
remote_storage: s3

My command results:

id
uid=0(root) gid=0(root) groups=0(root)

ls -la /var/lib/clickhouse/backup/2022-02-10T07-24-02/shadow
total 4
drwxr-x--- 11 root root  143 Feb 10 10:26 .
drwxr-x---  4 root root   57 Feb 10 10:26 ..
drwxr-x---  6 root root  192 Feb 10 10:26 bd1
drwxr-x---  4 root root   51 Feb 10 10:26 bd2
drwxr-x---  5 root root   86 Feb 10 10:26 bd3
drwxr-x---  5 root root  123 Feb 10 10:26 bd4
drwxr-x---  5 root root   86 Feb 10 10:26 bd5
drwxr-x--- 22 root root 4096 Feb 10 10:26 bd6
drwxr-x---  3 root root   31 Feb 10 10:26 bd7
drwxr-x---  4 root root   72 Feb 10 10:26 bd8
drwxr-x---  4 root root   47 Feb 10 10:26 system

Slach (Collaborator) commented Feb 10, 2022

Could you share the results of the following commands?

ls -la /var/lib/clickhouse/data
ls -la /var/lib/clickhouse/store

shattl2000 (Author) commented Feb 10, 2022

ls -la /var/lib/clickhouse/data
total 0
drwxr-x--- 13 clickhouse clickhouse 171 Feb 10 11:53 .
drwxrwx--- 16 clickhouse clickhouse 266 Feb  9 17:28 ..
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd1
drwxr-x---  2 clickhouse clickhouse 204 Feb 10 12:07 bd2
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 default
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd3
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 event
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd4
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd5
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd6
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd7
drwxr-x---  2 clickhouse clickhouse   6 Feb 10 11:53 bd8
drwxr-x---  2 clickhouse clickhouse  47 Feb  9 17:28 system
ls -la /var/lib/clickhouse/store
total 0
drwxr-x--- 23 clickhouse clickhouse 237 Feb 10 11:54 .
drwxrwx--- 16 clickhouse clickhouse 266 Feb  9 17:28 ..
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 025
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 049
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 094
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 0af
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 143
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 14a
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 25e
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 335
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 342
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 58f
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 5c1
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 60c
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 12:07 77a
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 8ea
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 9c2
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 a5c
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 bad
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:54 bc7
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 ddc
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 f79
drwxr-x---  3 clickhouse clickhouse  50 Feb 10 11:53 fe8

Slach (Collaborator) commented Feb 10, 2022

Was clickhouse-backup restore 2022-02-10T07-24-02 --rm executed as root or as another user?

shattl2000 (Author) commented:

It was executed as root.

Slach (Collaborator) commented Feb 10, 2022

Hmm,
could you share SELECT * FROM system.disks ?

clickhouse-backup restore should perform a chown operation during the restore.
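
For illustration only, a minimal Go sketch of the kind of recursive chown a restore needs (this is not the project's actual code, and looking up the "clickhouse" user is an assumption):

package main

import (
    "log"
    "os"
    "os/user"
    "path/filepath"
    "strconv"
)

// chownRecursive hands every file and directory under root to the given
// uid/gid, so that parts restored as root become readable by clickhouse-server.
func chownRecursive(root string, uid, gid int) error {
    return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        return os.Chown(path, uid, gid)
    })
}

func main() {
    u, err := user.Lookup("clickhouse") // assumption: the server runs as user "clickhouse"
    if err != nil {
        log.Fatal(err)
    }
    uid, _ := strconv.Atoi(u.Uid)
    gid, _ := strconv.Atoi(u.Gid)
    if err := chownRecursive("/var/lib/clickhouse/backup", uid, gid); err != nil {
        log.Fatal(err)
    }
}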

Slach (Collaborator) commented Feb 10, 2022

As a quick workaround, just run chown -R clickhouse:clickhouse /var/lib/clickhouse/backup
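
That matches the listings above: the downloaded backup files are owned by root, while the server's own data directories are owned by clickhouse, so clickhouse-server cannot read the root-owned parts when attaching them.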

shattl2000 (Author) commented:

select * from system.disks;

SELECT *
FROM system.disks

Query id: cdaba122-1634-44f7-9181-05f9169d9ad8

┌─name────┬─path─────────────────┬──free_space─┬──total_space─┬─keep_free_space─┬─type──┐
│ default │ /var/lib/clickhouse/ │ 97607118848 │ 343513497600 │               0 │ local │
└─────────┴──────────────────────┴─────────────┴──────────────┴─────────────────┴───────┘

1 rows in set. Elapsed: 0.002 sec.

Slach (Collaborator) commented Feb 10, 2022

I attached a debug build that adds logging for the chown operation:
clickhouse-backup.zip

Unpack the zip into the /tmp/ folder.
Could you run and share the results of the following command?

LOG_LEVEL=debug /tmp/clickhouse-backup restore 2022-02-10T07-24-02 --rm --table=bd2.bd2_sharded

shattl2000 (Author) commented Feb 10, 2022

I started the restore with this command:
clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --schema

And I got a new error:
warn can't create table 'bd6.table1': code: 62, message: Syntax error: failed at position 710 ('CLUSTER'): CLUSTER 'ch' (b.competition_id != '') AND (b.event_id = ''). Expected one of: UNION, LIMIT, WINDOW, LIKE, GLOBAL NOT IN, end of query, HAVING, AS, DIV, IS, GROUP BY, INTO OUTFILE, OR, QuestionMark, BETWEEN, OFFSET, NOT LIKE, MOD, AND, Comma, alias, ORDER BY, SETTINGS, IN, ILIKE, FORMAT, Dot, NOT ILIKE, WITH, NOT, Arrow, token, NOT IN, GLOBAL IN, will try again backup=clickhouse_backup_2022-02-10T15-21-12 operation=restore

When I run this command to create the table in ClickHouse, it completes without error:
"query": "ATTACH MATERIALIZED VIEW bd6.table1 UUID '27493bcc-f130-44c8-8e01-cdb87995321b' TO bd6.table2 (periodString,operator_idUInt64,competition_idString,gambling_type_idUInt32,bets_amountDecimal(16, 2),bets_countUInt64,is_deletedUInt8,updated_atDateTime64(3, 'UTC'),ver Int64) AS SELECT DISTINCT b.period AS period, b.operator_id AS operator_id, b.competition_id AS competition_id, b.gambling_type_id AS gambling_type_id, ifNull(b.bets_amount, toDecimal64(0, 2)) AS bets_amount, ifNull(b.bets_count, 0) AS bets_count, b.is_deleted AS is_deleted, b.updated_at AS updated_at, toUnixTimestamp64Milli(b.created_at) AS ver FROM bd6.table5 AS b WHERE (b.competition_id != '') AND (b.event_id = '')"

I took this command from a metadata file in the backup.
After manually creating the table, I ran the restore again:
clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --data
And this time it restored without errors.

Slach (Collaborator) commented Feb 11, 2022

Sorry, I don't understand exactly what you did.
We were trying to resolve the issue with the failed table attach and wrong permissions.

It looks like your last message is a separate issue?

Could you turn on export LOG_LEVEL=debug (environment variable) or change config.yml:

general:
  log_level: debug

and share the result of
LOG_LEVEL=debug clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --schema ?

shattl2000 (Author) commented:

That's right, the last build you attached fixed that bug; the databases were restored with their tables. But now I get a new error that some tables could not be created, as I wrote in the previous comment.

Slach (Collaborator) commented Feb 11, 2022

Looks strange, because the attached build is the same as 1.3.0 and just adds a log line for the chown operation. Did you run chown -R clickhouse:clickhouse /var/lib/clickhouse/backup before executing the attached debug build?

OK, let's check your tables and try to figure it out.
Could you share the result of the following command?

cat /var/lib/clickhouse/backup/clickhouse_backup_2022-02-10T15-21-12/metadata/bd6/table1.json

shattl2000 (Author) commented Feb 11, 2022

No, I didn't run chown -R before executing the attached build.

cat /var/lib/clickhouse/backup/clickhouse_backup_2022-02-10T15-21-12/metadata/bd6/table1.json
{
        "table": "table1",
        "database": "bd6",
        "parts": {},
        "query": "ATTACH MATERIALIZED VIEW bd6.table1 UUID '27493bcc-f130-44c8-8e01-cdb87995321b' TO bd6.table2 (periodString,operator_idUInt64,competition_idString,gambling_type_idUInt32,bets_amountDecimal(16, 2),bets_countUInt64,is_deletedUInt8,updated_atDateTime64(3, 'UTC'),ver Int64) AS SELECT DISTINCT b.period AS period, b.operator_id AS operator_id, b.competition_id AS competition_id, b.gambling_type_id AS gambling_type_id, ifNull(b.bets_amount, toDecimal64(0, 2)) AS bets_amount, ifNull(b.bets_count, 0) AS bets_count, b.is_deleted AS is_deleted, b.updated_at AS updated_at, toUnixTimestamp64Milli(b.created_at) AS ver FROM bd6.table5 AS b WHERE (b.competition_id != '') AND (b.event_id = '')",
        "size": null,
        "metadata_only": false
}

I just copied the command from the query field and executed it in ClickHouse; the table was created.

Slach (Collaborator) commented Feb 11, 2022

@shattl2000 OK.
Could you share clickhouse-backup print-config
without sensitive credentials?

shattl2000 (Author) commented:

Sorry for the late reply.

clickhouse-backup print-config
general:
  remote_storage: s3
  max_file_size: 1073741824
  disable_progress_bar: false
  backups_to_keep_local: 3
  backups_to_keep_remote: 15
  log_level: debug
  allow_empty_backups: false
  download_concurrency: 2
  upload_concurrency: 2
  restore_schema_on_cluster: test
  upload_by_part: true
  download_by_part: true
clickhouse:
  username: default
  password: ""
  host: localhost
  port: 9000
  disk_mapping: {}
  skip_tables:
  - system.*
  - INFORMATION_SCHEMA.*
  - information_schema.*
  timeout: 5m
  freeze_by_part: false
  secure: false
  skip_verify: false
  sync_replicated_tables: false
  log_sql_queries: false
  config_dir: /etc/clickhouse-server/
  restart_command: systemctl restart clickhouse-server
  ignore_not_exists_error_during_freeze: true
  debug: false
s3:
  access_key: 
  secret_key: 
  bucket: test
  endpoint: url-s3
  region: region
  acl: private
  assume_role_arn: ""
  force_path_style: false
  path: 
  disable_ssl: false
  compression_level: 1
  compression_format: gzip
  sse: ""
  disable_cert_verification: false
  storage_class: STANDARD
  concurrency: 1
  part_size: 104857600
  debug: false
gcs:
  credentials_file: ""
  credentials_json: ""
  bucket: ""
  path: ""
  compression_level: 1
  compression_format: tar
  debug: false
  endpoint: ""
cos:
  url: ""
  timeout: 2m
  secret_id: ""
  secret_key: ""
  path: ""
  compression_format: tar
  compression_level: 1
  debug: false
api:
  listen: localhost:7171
  enable_metrics: true
  enable_pprof: false
  username: ""
  password: ""
  secure: false
  certificate_file: ""
  private_key_file: ""
  create_integration_tables: false
  allow_parallel: false
ftp:
  address: ""
  timeout: 2m
  username: ""
  password: ""
  tls: false
  path: ""
  compression_format: tar
  compression_level: 1
  concurrency: 2
  debug: false
sftp:
  address: ""
  port: 22
  username: ""
  password: ""
  key: ""
  path: ""
  compression_format: tar
  compression_level: 1
  concurrency: 1
  debug: false
azblob:
  endpoint_suffix: core.windows.net
  account_name: ""
  account_key: ""
  sas: ""
  use_managed_identity: false
  container: ""
  path: ""
  compression_level: 1
  compression_format: tar
  sse_key: ""
  buffer_size: 0
  buffer_count: 3

Slach (Collaborator) commented Feb 14, 2022

restore_schema_on_cluster: test

This is the key to the last error.
Could you share the full log of the following command?

LOG_LEVEL=debug clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --schema

I need the ClickHouse SQL query which clickhouse-backup tries to execute before the error.

shattl2000 (Author) commented Feb 14, 2022

I ran the command
LOG_LEVEL=debug clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --schema

and got the same error:
2022/02/14 11:27:42.242435 debug CREATE DATABASE IF NOT EXISTS bd6
2022/02/14 11:27:42.243775 debug ATTACH MATERIALIZED VIEW bd6.table1 UUID '27493bcc-f130-44c8-8e01-cdb87995321b' TO bd6.table2 (`period` String, `operator_id` UInt64, `competition_id` String, `gambling_type_id` UInt32, `bets_amount` Decimal(16, 2), `bets_count` UInt64, `is_deleted` UInt8, `updated_at` DateTime64(3, 'UTC'), `ver` Int64) AS SELECT DISTINCT b.period AS period, b.operator_id AS operator_id, b.competition_id AS competition_id, b.gambling_type_id AS gambling_type_id, ifNull(b.bets_amount, toDecimal64(0, 2)) AS bets_amount, ifNull(b.bets_count, 0) AS bets_count, b.is_deleted AS is_deleted, b.updated_at AS updated_at, toUnixTimestamp64Milli(b.created_at) AS ver FROM bd6.table5 AS b WHERE ON CLUSTER 'test' (b.competition_id != '') AND (b.event_id = '')
2022/02/14 11:27:42.245461 warn can't create table 'bd6.table1': code: 62, message: Syntax error: failed at position 710 ('CLUSTER'): CLUSTER 'test' (b.competition_id != '') AND (b.event_id = ''). Expected one of: UNION, LIMIT, WINDOW, LIKE, GLOBAL NOT IN, end of query, HAVING, AS, DIV, IS, GROUP BY, INTO OUTFILE, OR, QuestionMark, BETWEEN, OFFSET, NOT LIKE, MOD, AND, Comma, alias, ORDER BY, SETTINGS, IN, ILIKE, FORMAT, Dot, NOT ILIKE, WITH, NOT, Arrow, token, NOT IN, GLOBAL IN, will try again backup=clickhouse_backup_2022-02-10T15-21-12 operation=restore

Slach changed the title from "When restoring data from a backup, only databases without tables are restored" to "ATTACH MATERIALIZED VIEW + ON CLUSTER doesn't work as expected" on Feb 14, 2022
Slach self-assigned this on Feb 14, 2022
Slach added this to the 1.3.1 milestone on Feb 14, 2022
Slach (Collaborator) commented Feb 15, 2022

Are you sure the file you shared here in #388 (comment) is correct?

I don't see spaces between field names and field types in the shared cat *.json result.

Did you try to manually change the metadata/bd6/table1.json file in the backup folder?

I looked into the codebase, and the provided query should be converted with an ON CLUSTER clause correctly.
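
For reference, the debug log above shows ON CLUSTER 'test' spliced into the trailing WHERE clause, while it belongs immediately after the view name and its UUID clause, before the TO target. A simplified, hypothetical Go sketch of that insertion point (not the actual clickhouse-backup implementation; the exact clause order is an assumption):

package main

import (
    "fmt"
    "regexp"
)

// addOnCluster inserts ON CLUSTER right after the view name and optional
// UUID clause of an ATTACH MATERIALIZED VIEW statement. Splicing it any
// later (e.g. into the SELECT's WHERE clause, as in the bug) yields
// "Syntax error: failed at position ... ('CLUSTER')".
func addOnCluster(query, cluster string) string {
    re := regexp.MustCompile(`^(ATTACH MATERIALIZED VIEW \S+(?: UUID '[^']+')?)`)
    return re.ReplaceAllString(query, fmt.Sprintf("$1 ON CLUSTER '%s'", cluster))
}

func main() {
    q := "ATTACH MATERIALIZED VIEW bd6.table1 UUID '27493bcc-f130-44c8-8e01-cdb87995321b' TO bd6.table2 (`period` String) AS SELECT period FROM bd6.table5"
    fmt.Println(addOnCluster(q, "test"))
    // ATTACH MATERIALIZED VIEW bd6.table1 UUID '...' ON CLUSTER 'test' TO bd6.table2 ...
}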

shattl2000 (Author) commented:

It must be correct. I didn't manually change the file.

cat /var/lib/clickhouse/backup/clickhouse_backup_2022-02-10T15-21-12/metadata/bd6/table1.json
{
	"table": "table1",
	"database": "bd6",
	"parts": {},
	"query": "ATTACH MATERIALIZED VIEW bd6.table1 UUID '27493bcc-f130-44c8-8e01-cdb87995321b' TO bd6.table2 (`period` String, `operator_id` UInt64, `competition_id` String, `gambling_type_id` UInt32, `bets_amount` Decimal(16, 2), `bets_count` UInt64, `is_deleted` UInt8, `updated_at` DateTime64(3, 'UTC'), `ver` Int64) AS SELECT DISTINCT b.period AS period, b.operator_id AS operator_id, b.competition_id AS competition_id, b.gambling_type_id AS gambling_type_id, ifNull(b.bets_amount, toDecimal64(0, 2)) AS bets_amount, ifNull(b.bets_count, 0) AS bets_count, b.is_deleted AS is_deleted, b.updated_at AS updated_at, toUnixTimestamp64Milli(b.created_at) AS ver FROM bd6.table5 AS b WHERE (b.competition_id != '') AND (b.event_id = '')",
	"size": null,
	"metadata_only": false
}

Slach (Collaborator) commented Feb 15, 2022

@shattl2000 could you run the following debug build?
Please unpack it into the /tmp/ folder:
clickhouse-backup.zip

and run
LOG_LEVEL=debug /tmp/clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --schema

shattl2000 (Author) commented:

The same error:

2022/02/15 19:18:21.450556 debug CREATE DATABASE IF NOT EXISTS bd6
2022/02/15 19:18:21.451940 debug ATTACH MATERIALIZED VIEW bd6.table1 UUID '27493bcc-f130-44c8-8e01-cdb87995321b' TO bd6.table2 (`period` String, `operator_id` UInt64, `competition_id` String, `gambling_type_id` UInt32, `bets_amount` Decimal(16, 2), `bets_count` UInt64, `is_deleted` UInt8, `updated_at` DateTime64(3, 'UTC'), `ver` Int64) AS SELECT DISTINCT b.period AS period, b.operator_id AS operator_id, b.competition_id AS competition_id, b.gambling_type_id AS gambling_type_id, ifNull(b.bets_amount, toDecimal64(0, 2)) AS bets_amount, ifNull(b.bets_count, 0) AS bets_count, b.is_deleted AS is_deleted, b.updated_at AS updated_at, toUnixTimestamp64Milli(b.created_at) AS ver FROM bd6.table5 AS b WHERE ON CLUSTER 'test' (b.competition_id != '') AND (b.event_id = '')
2022/02/15 19:18:21.453595 warn can't create table 'bd6.table1': code: 62, message: Syntax error: failed at position 710 ('CLUSTER'): CLUSTER 'test' (b.competition_id != '') AND (b.event_id = ''). Expected one of: UNION, LIMIT, WINDOW, LIKE, GLOBAL NOT IN, end of query, HAVING, AS, DIV, IS, GROUP BY, INTO OUTFILE, OR, QuestionMark, BETWEEN, OFFSET, NOT LIKE, MOD, AND, Comma, alias, ORDER BY, SETTINGS, IN, ILIKE, FORMAT, Dot, NOT ILIKE, WITH, NOT, Arrow, token, NOT IN, GLOBAL IN, will try again backup=clickhouse_backup_2022-02-10T15-21-12 operation=restore

Slach (Collaborator) commented Feb 16, 2022

clickhouse-backup.zip
Reproduced, thanks a lot for reporting.
Check the latest debug build; it should work as expected.

shattl2000 (Author) commented:

Thanks, it looks like everything is all right now. The schema restored without errors. Could you tell me the right way to restore the data?
First I run
clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --schema
and after it completes, I run this command:
clickhouse-backup restore clickhouse_backup_2022-02-10T15-21-12 --data
Or can I somehow restore both the schema and the data with one command?

Slach (Collaborator) commented Feb 16, 2022

These two commands are enough.
Moreover, you can just skip --schema and --data and use the single command clickhouse-backup restore --rm <backup_name>; clickhouse-backup will then drop existing tables before restoring.
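
For example, a complete restore from remote storage could then look like this (backup name taken from this thread):

clickhouse-backup download clickhouse_backup_2022-02-10T15-21-12
clickhouse-backup restore --rm clickhouse_backup_2022-02-10T15-21-12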

shattl2000 (Author) commented:

Thanks a lot! I think this issue can be closed.

sushraju added a commit to muxinc/clickhouse-backup that referenced this issue Jun 27, 2022. Among many unrelated changes, the squashed commit message includes the fixes for this issue:

* refactoring `filesystemhelper.Chown` remove unnecessary getter/setter, try to reproduce access denied for Altinity#388 (comment)

* fix Altinity#388, improve restore ON CLUSTER for VIEW with TO clause

* fix Altinity#388, improve restore ATTACH ... VIEW ... ON CLUSTER, GCS golang sdk updated to latest