IMPROVEMENTS
- first implementation of proper backups for S3/GCS/Azure disks, support server-side copy to the backup bucket during `clickhouse-backup create` and `clickhouse-backup restore`, requires adding `object_disk_path` to the `s3`, `gcs`, and `azblob` config sections, fix 447
- implement a blacklist for table engines during backup / download / upload / restore, fix 537
- restore RBAC / configs, refactor restarting clickhouse-server via `sql:SYSTEM SHUTDOWN` or `exec:systemctl restart clickhouse-server`, add `--rbac-only` and `--configs-only` options to the `create`, `upload`, `download`, and `restore` commands, fix 706
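As a sketch, the new `object_disk_path` option lands in the per-storage config sections roughly like this (the surrounding keys are illustrative examples, not documented defaults; check the project ReadMe for the authoritative schema):

```yaml
s3:
  bucket: "my-backup-bucket"        # assumed example value
  object_disk_path: "object_disks"  # where object-disk data is server-side copied inside the bucket
gcs:
  object_disk_path: "object_disks"
azblob:
  object_disk_path: "object_disks"
```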
BUG FIXES
- fix possible backup creation failures during UNFREEZE of non-existent tables, affects versions 2.2.7+, fix 704
BUG FIXES
- fix error when `backups_to_keep_local: -1`, fix 698
- minimal value for `download_concurrency` and `upload_concurrency` is 1, fix 688
- do not create UDFs when the `--data` flag is used, fix 697
IMPROVEMENTS
- add support for the `use_environment_credentials` option inside the `clickhouse-server` backup object disk definition, fix 691
- add (but skip) tests for the `azure_blob_storage` backup disk with `use_embedded_backup_restore: true`; it works, but slowly, see ClickHouse/ClickHouse#52088 for details
BUG FIXES
- fix static build for FIPS compatible mode, fix 693
- complete success/failure server callback notification even when the main context is canceled, fix 680
- `clean` command will not return an error when the shadow directory does not exist, fix 686
IMPROVEMENTS
- add FIPS compatible builds and examples, fix 656, fix 674
- improve support for `use_embedded_backup_restore: true`: applied an ugly workaround in tests to avoid ClickHouse/ClickHouse#43971, and applied a restore workaround to resolve ClickHouse/ClickHouse#42709
- migrate to `clickhouse-go/v2`, fix 540, close 562
- add documentation for `AWS_ARN_ROLE` and `AWS_WEB_IDENTITY_TOKEN_FILE`, fix 563
BUG FIXES
- hotfix wrong empty files when `disk_mapping` contains a disk that does not exist during create, affects version 2.2.7, see 676 for details
- add `FTP_ADDRESS` and `SFTP_PORT` to the default config section of ReadMe.md, fix 668
- when the `--tables=db.materialized_view` pattern is used, also create/restore the backup for `.inner.materialized_view` or `.inner_id.uuid` tables, fix 613
BUG FIXES
- hotfix wrong empty files when `disk_mapping` contains a disk that does not exist during create, affects version 2.2.7, see 676 for details
IMPROVEMENTS
- auto-tune concurrency and buffer size related parameters depending on the remote storage type, fix 658
- add `CLICKHOUSE_BACKUP_MUTATIONS` and `CLICKHOUSE_RESTORE_AS_ATTACH` config options to allow backing up and properly restoring tables with system.mutations is_done=0 status, fix 529
- add `CLICKHOUSE_CHECK_PARTS_COLUMNS` config option and `--skip-check-parts-column` CLI parameter to the `watch`, `create`, and `create_remote` commands to disallow backups with inconsistent column data types, fix 529
- add test coverage reports for unit, testflows and integration tests, fix 644
- use UNFREEZE TABLE in ClickHouse after the backup finishes to allow s3 and other object storage disks to unlock and delete remote keys during merge, fix 423
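A hedged sketch of how the new mutation/column-check options might look in config.yml (the `clickhouse` section placement is inferred from the environment-variable prefixes; verify the exact key names against the ReadMe):

```yaml
clickhouse:
  backup_mutations: true       # CLICKHOUSE_BACKUP_MUTATIONS: include unfinished mutations in the backup
  restore_as_attach: false     # CLICKHOUSE_RESTORE_AS_ATTACH: restore mutated tables via ATTACH
  check_parts_columns: true    # CLICKHOUSE_CHECK_PARTS_COLUMNS: refuse backups with inconsistent column types
```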
BUG FIXES
- apply `SETTINGS check_table_dependencies=0` to the `DROP DATABASE` statement when `--ignore-dependencies` is passed together with `--rm` in the `restore` command, fix 651
- add support for masked secrets for ClickHouse 23.3+, fix 640
BUG FIXES
- fix panic for resumed upload after API server restart for boolean parameters, fix 653
- apply `SETTINGS check_table_dependencies=0` to the `DROP DATABASE` statement when `--ignore-dependencies` is passed together with `--rm` in the `restore` command, fix 651
BUG FIXES
- fix error after API server restart for boolean parameters, fix 646
- fix corner cases when `restore_schema_on_cluster: cluster`, fix 642; the error happens on 2.2.0-2.2.4
- fix `Makefile` targets `build-docker` and `build-race-docker` for old clickhouse-server versions
- fix typo in the `retries_pause` config definition in the general section
BUG FIXES
- fix wrong deletion on S3 for versioned buckets, use `s3.HeadObject` instead of `s3.GetObjectAttributes`, fix 643
BUG FIXES
- fix wrong parameter parsing from the *.state file for resumable upload / download after restart, fix 641
IMPROVEMENTS
- add `callback` parameter to the upload, download, create, and restore API endpoints, fix 636
BUG FIXES
- document in ReadMe.md that system.macros can be applied to the `path` config section, fix 638
- fix connection leaks for S3 versioned buckets during the upload and delete commands, fix 637
IMPROVEMENTS
- add additional server-side encryption parameters to the s3 config section, fix 619
- `restore_remote` will not return an error when the backup already exists in local storage during the download check, fix 625
BUG FIXES
- fix error after API server restart when a .state file is present in the backup folder, fix 623
- fix uploading / downloading files from projections multiple times, which caused backups to wrongly create *.proj as a separate data part, fix 622
IMPROVEMENTS
- switch to go 1.20
- after API server startup, if `/var/lib/clickhouse/backup/*/(upload|download).state` is present, the operation will continue in the background, fix 608
- make `use_resumable_state: true` the default behavior for `upload` and `download`, fix 608
- improve the behavior of the `--partitions` parameter for cases when the PARTITION BY clause returns a hashed value instead of a numeric prefix for `partition_id` in `system.parts`, fix 602
- apply `system.macros` values when `restore_schema_on_cluster` is used, and replace the cluster name in engine=Distributed tables, fix 574
- switch S3 storage backend to https://github.com/aws/aws-sdk-go-v2/, fix 534
- added `S3_OBJECT_LABELS` and `GCS_OBJECT_LABELS` to allow setting up each backup object's metadata during upload, fix 588
- added `clickhouse-keeper` as a zookeeper replacement for integration tests while reproducing 416
- decrease memory buffers for S3 and GCS, change the default value for `upload_concurrency` and `download_concurrency` to `round(sqrt(MAX_CPU / 2))`, fix 539
- added the ability to set up a custom storage class for GCS and S3 depending on the backupName pattern, fix 584
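For orientation, a sketch of the resumable/concurrency settings described above as they might appear in the `general` config section (exact key names are assumptions to check against the ReadMe; the concurrency values illustrate the new `round(sqrt(MAX_CPU / 2))` default on an 8-CPU host):

```yaml
general:
  use_resumable_state: true   # now the default: upload/download resume from *.state files after restart
  upload_concurrency: 2       # round(sqrt(8 / 2)) = 2 on an 8-CPU host
  download_concurrency: 2
```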
BUG FIXES
- fix ssh connection leak for SFTP remote storage, fix 578
- fix wrong Content-Type header, fix 605
- fix wrong behavior of `--partitions` for `download`, fix 606
- fix wrong size of backup in the list command if an upload or download was interrupted and resumed, fix 526
- fix `_successful_` and `_failed_` metrics counter issue, introduced after 2.1.0, fix 589
- fix wrong calculation of the date of the last remote backup during startup
- fix wrong duration and status for metrics after the 2.1.0 refactoring, fix 599
- fix panic on LIVE VIEW tables with the --restore-database-mapping db:db_new option enabled, thanks @php53unit
IMPROVEMENTS
- during upload, sort tables in descending order by `total_bytes` if this field is present
- improve ReadMe.md, add descriptions for all CLI commands and parameters
- add `use_resumable_state` to config to allow resumable behavior by default in the `create_remote`, `upload`, `restore_remote`, and `download` commands, fix 576
BUG FIXES
- fix `--watch-backup-name-template` command line parsing, which was overridden after config reload, fix 548
- fix wrong regexp when `restore_schema_on_cluster: cluster_name`, fix 552
- fix wrong `clean` command and API behavior, fix 533
- fix getMacro usage in Examples for backup / restore on a sharded cluster
- fix deletion of files from an S3 versioned bucket, fix 555
- fix `--restore-database-mapping` behavior for `ReplicatedMergeTree` (replace database name in replication path) and `Distributed` (replace database name in underlying table) tables, fix 547
- `MaterializedPostgreSQL` doesn't support FREEZE, fix 550, see also ClickHouse/ClickHouse#32902, ClickHouse/ClickHouse#44252
- `create` and `restore` commands will respect the `skip_tables` config options and the `--table` cli parameter, to avoid creating unnecessary empty databases, fix 583
- fix `watch` unexpected connection closed behavior, fix 568
- fix `watch` validation parameter corner cases, close 569
- fix `--restore-database-mapping` behavior for `ATTACH MATERIALIZED VIEW`, `CREATE VIEW`, and `restore --data` corner cases, fix 559
IMPROVEMENTS
- add `watch` description to Examples.md
BUG FIXES
- fix panic when `--restore-database-mapping=db1:db2` is used, fix 545
- fix panic when `--partitions=XXX` is used, fix 544
BUG FIXES
- return bash and the clickhouse user group to the short Dockerfile image, fix 542
IMPROVEMENTS
- complex refactoring to use contexts; AWS and SFTP storage are not fully supported yet
- complex refactoring of logging to avoid a race condition when the log level changes during config reload
- improve the kubernetes example for adjusting incremental backup, fix 523
- add a storage-independent retries policy, fix 397
- add `clickhouse-backup-full` docker image with integrated `kopia`, `rsync`, `restic`, and `clickhouse-local`, fix 507
- implement `GET /backup/kill?command=XXX` API to allow killing commands, fix 516
- implement `kill "full command"` in the `POST /backup/actions` handler, fix 516
- implement `watch` in the `POST /backup/actions` handler API and as a CLI command, fix 430
- implement `clickhouse-backup server --watch` to allow the server to start watching after startup, fix 430
- update `last_{create|create_remote|upload}_finish` metric values during API server startup, fix 515
- implement `clean_remote_broken` command and `POST /backup/clean/remote_broken` API request, fix 520
- add metric `number_backups_remote_broken` to count broken remote backups, fix 530
BUG FIXES
- fix `keep_backups_remote` behavior for recursive incremental sequences, fix 525
- for the `restore` command, call `DROP DATABASE IF EXISTS db SYNC` when `--schema` and `--drop` are passed together, fix 514
- close persistent connections to remote backup storage after command execution, fix 535
- lots of typo fixes
- fix API server always returning 200 status (except errors) and ignoring the status passed from application code
IMPROVEMENTS
- implement `remote_storage: custom`, which allows us to adopt any external backup system like `restic`, `kopia`, `rsync`, rclone etc., fix 383
- add an example workflow for backup / restore on a sharded cluster, fix 469
- add `use_embedded_backup_restore` to allow `BACKUP` and `RESTORE` SQL command usage, fix 323; needs 22.7+ and the resolution of ClickHouse/ClickHouse#39416
- add `timeout` to the `azure` config (`AZBLOB_TIMEOUT`) to allow downloads with bad network quality, fix 467
- switch to go 1.19
- refactoring to remove the legacy `storage` package
- add `table` parameter to the `tables` cli command and the `/backup/tables` API handler, fix 367
- add `--resumable` parameter to the `create_remote`, `upload`, `restore_remote`, and `download` commands to allow resuming upload or download after a break; ignored for `remote_storage: custom`, fix 207
- add `--ignore-dependencies` parameter to `restore` and `restore_remote`, to allow dropping objects during schema restore on a server where schema objects already exist and contain dependencies not present in the backup, fix 455
- add `restore --restore-database-mapping=<originDB>:<targetDB>[,<...>]`, fix 269, thanks @mojerro
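A sketch of enabling the two headline features above in config.yml; the key placement is inferred from the option names, and the `custom` command templates are hypothetical wrappers, not documented defaults:

```yaml
general:
  remote_storage: custom              # delegate upload/download to an external tool
clickhouse:
  use_embedded_backup_restore: true   # use ClickHouse's own BACKUP / RESTORE SQL (needs 22.7+)
custom:
  # hypothetical restic wrappers; adapt to whatever external backup system you adopt
  upload_command: "restic backup /var/lib/clickhouse/backup/{{ .backupName }}"
  download_command: "restic restore latest --target /var/lib/clickhouse/backup/"
```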
BUG FIXES
- fix wrong upload / download behavior for `compression_format: none` with `remote_storage: ftp`
IMPROVEMENTS
- add Azure to every CI/CD run, testing with `Azurite`
BUG FIXES
- fix `azblob.Walk` with `recursive=True`, to properly delete remote backups
BUG FIXES
- fix the system.macros detection query
IMPROVEMENTS
- add `storage_class` (`GCS_STORAGE_CLASS`) support for `remote_storage: gcs`, fix 502
- upgrade the aws golang sdk and gcp golang sdk to the latest versions
IMPROVEMENTS
- switch to go 1.19
- refactoring to remove the legacy `storage` package
BUG FIXES
- properly execute `CREATE DATABASE IF NOT EXISTS ... ON CLUSTER` when `restore_schema_on_cluster` is set up, fix 486
IMPROVEMENTS
- try to improve the `check_replicas_before_attach` implementation to avoid concurrent ATTACH PART execution during the `restore` command on a multi-shard cluster, fix 474
- add `timeout` to the `azure` config (`AZBLOB_TIMEOUT`) to allow downloads with bad network quality, fix 467
BUG FIXES
- fix `download` behavior for parts whose names contain special characters, fix 462
IMPROVEMENTS
- add `check_replicas_before_attach` configuration to avoid concurrent ATTACH PART execution during the `restore` command on a multi-shard cluster, fix 474
- allow listing backups when the clickhouse server is offline, fix 476
- add `use_custom_storage_class` (`S3_USE_CUSTOM_STORAGE_CLASS`) option to the `s3` section, thanks @realwhite
BUG FIXES
- resolve `{uuid}` macros during restore for `ReplicatedMergeTree` tables and ClickHouse server 22.5+, fix 466
IMPROVEMENTS
- PROPERLY restore to the default disk if disks are not found on the destination clickhouse server, fix 457
BUG FIXES
- fix infinite loop `error can't acquire semaphore during Download: context canceled` and `error can't acquire semaphore during Upload: context canceled`; all 1.4.x users are recommended to upgrade to 1.4.6
IMPROVEMENTS
- add `CLICKHOUSE_FREEZE_BY_PART_WHERE` option, which allows freezing by part with a WHERE condition, thanks @vahid-sohrabloo
IMPROVEMENTS
- download and restore to the default disk if disks are not found on the destination clickhouse server, fix 457
IMPROVEMENTS
- add `API_INTEGRATION_TABLES_HOST` option to allow using a DNS name in the integration tables system.backup_list and system.backup_actions
BUG FIXES
- fix `upload_by_part: false` max file size calculation, fix 454
BUG FIXES
- fix `--partitions` parameter parsing, fix 425
BUG FIXES
- fix upload data goroutine waiting; expect the same upload speed as 1.3.2
IMPROVEMENTS
- add `S3_ALLOW_MULTIPART_DOWNLOAD` to config to improve download speed, fix 431
- add support for backup/restore of user defined functions, fix 420
- add `clickhouse_backup_number_backups_remote`, `clickhouse_backup_number_backups_local`, `clickhouse_backup_number_backups_remote_expected`, and `clickhouse_backup_number_backups_local_expected` prometheus metrics, fix 437
- add the ability to apply `system.macros` values to the `path` field in all types of `remote_storage`, fix 438
- use all disks in parallel for upload and download for multi-disk volumes when `upload_by_part: true`, fix #400
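A hedged sketch combining two of the options above in config.yml; the `{cluster}`/`{shard}` macros shown in `path` are illustrative system.macros names, and the exact key spellings should be verified against the ReadMe:

```yaml
s3:
  allow_multipart_download: true    # S3_ALLOW_MULTIPART_DOWNLOAD: parallel ranged downloads
  path: "backup/{cluster}/{shard}"  # system.macros values can now be expanded here
general:
  upload_by_part: true              # enables per-part, per-disk parallel upload/download
```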
BUG FIXES
- fix wrong warning for .gz, .bz2, .br archive extensions during download, fix 441
IMPROVEMENTS
- add TLS certificates and TLS CA support for clickhouse connections, fix 410
- switch to go 1.18
- add clickhouse version 22.3 to integration tests
- add `S3_MAX_PARTS_COUNT` and `AZBLOB_MAX_PARTS_COUNT` to properly calculate buffer sizes during upload and download for custom S3 implementations like Swift
- add multithreaded GZIP implementation
BUG FIXES
- fix 406, properly handle `path` for S3 and GCS when it begins with "/"
- fix 409, avoid deleting partially uploaded backups via the `backups_keep_remote` option
- fix 422, avoid caching broken (partially uploaded) remote backup metadata
- fix 404, properly calculate S3_PART_SIZE to avoid freezes after 10000 multipart uploads, and properly handle errors when upload and download goroutines fail, to avoid pipe stuck
IMPROVEMENTS
- fix 387, improve documentation related to memory and CPU usage
BUG FIXES
- fix 392, correct download for a recursive sequence of diff backups when `DOWNLOAD_BY_PART` is true
- fix 390, respect skip_tables patterns during restore and skip all INFORMATION_SCHEMA related tables even if skip_tables doesn't contain an INFORMATION_SCHEMA pattern
- fix 388, improve restore ON CLUSTER for VIEW with TO clause
- fix 385, properly handle multiple incremental backup sequences together with `BACKUPS_TO_KEEP_REMOTE`
IMPROVEMENTS
- Add `API_ALLOW_PARALLEL` to support multiple parallel execution calls. WARNING: control command names, don't try to execute multiple identical commands, and be careful, it could allocate a lot of memory during upload / download, fix #332
- Add support for `--partitions` on the create, upload, download, and restore CLI commands and API endpoints, fix #378, a proper implementation of #356
- Add `--diff-from-remote` implementation for the `upload` command and properly handle `required` on the download command, fix #289
- Add `print-config` cli command, fix #366
- Add `UPLOAD_BY_PART` (default: true) option to improve upload/download concurrency, fix #324
- Add support for the ARM platform for Docker images and pre-compiled binary files, fix #312
- KeepRemoteBackups should respect differential backups, fix #111
- Add `SFTP_DEBUG` option, fix #335
- Add ability to restore schema `ON CLUSTER`, fix #145
- Add support for encrypted disks (including s3 encrypted disks), fix #260
- API Server optimization: calculate the `last_backup_size_remote` metric asynchronously during REST API startup and after download/upload, fix #309
- Improve `list remote` speed via a local metadata cache in `$TEMP/.clickhouse-backup.$REMOTE_STORAGE`, fix #318
- Add `CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE` option, fix #319
- Add support for PROJECTION, fix #320
- Return the `clean` cli command and the `POST /backup/clean` API endpoint, fix #379
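A minimal sketch of the new parallel-API switch; the `api` section key name and the `listen` value are assumptions inferred from the `API_ALLOW_PARALLEL` environment variable, not confirmed defaults:

```yaml
api:
  listen: "localhost:7171"  # assumed example value
  allow_parallel: false     # API_ALLOW_PARALLEL: keep false unless you accept the memory risk noted above
```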
BUG FIXES
- fix #300, allow GCP to work properly with an empty `GCP_PATH` value
- fix #340, properly handle errors on S3 during Walk() and deleting old backups
- fix #331, properly restore tables whose name is the same as their database name
- fix #311, properly run clickhouse-backup inside a docker container via entrypoint
- fix #317, properly upload large files to Azure Blob Storage
- fix #220, properly handle total_bytes for the uint64 type
- fix #304, properly handle the archive extension during download instead of using config settings
- fix #375, properly handle `REMOTE_STORAGE=none` errors
- fix #379, will try to clean `shadow` if `create` fails during `moveShadow`
- more precise backup size calculation during `upload` for backups created with `--partitions`, fix bug after #356
- fix `restore --rm` behavior for 20.12+ for tables which have dependent objects (like dictionaries)
- fix concurrency of `FTP` directory creation during upload, reduce connection pool usage
- properly handle the `--schema` parameter to show local backup size after `download`
- fix restore bug for WINDOW VIEW, thanks @zvonand
EXPERIMENTAL
- Try to add experimental support for backing up `MaterializedMySQL` and `MaterializedPostgreSQL` tables; restoring MySQL tables is not possible now without replacing `table_name.json` to `Engine=MergeTree`; PostgreSQL is not supported now, see ClickHouse/ClickHouse#32902
HOT FIXES
- fix 409, avoid deleting partially uploaded backups via the `backups_keep_remote` option
HOT FIXES
- fix 390, respect skip_tables patterns during restore and skip all INFORMATION_SCHEMA related tables even if skip_tables doesn't contain an INFORMATION_SCHEMA pattern
IMPROVEMENTS
- Add REST API `POST /backup/tables/all`, fix `POST /backup/tables` to respect `CLICKHOUSE_SKIP_TABLES`
BUG FIXES
- fix #297, properly restore tables that have fields with the same name as the table name
- fix #298, properly create `system.backup_actions` and `system.backup_list` integration tables for ClickHouse before 21.1
- fix #303, ignore leading and trailing spaces in `skip_tables` and `--tables` parameters
- fix #292, lock the clickhouse connection pool to a single connection
IMPROVEMENTS
- Add REST API integration tests
BUG FIXES
- fix #290
- fix #291
- fix `CLICKHOUSE_DEBUG` settings behavior (now we can see debug logs from clickhouse-go)
INCOMPATIBLE CHANGES
- REST API `/backup/status` now returns only the latest executed command with its status and error message
IMPROVEMENTS
- Added REST API `/backup/list/local` and `/backup/list/remote` to allow listing backup types separately
- Decreased background backup creation time via REST API `/backup/create` by avoiding listing remote backups to update metric values
- Decreased backup creation time by avoiding a scan of the whole `system.tables` when the `table` query string parameter or `--tables` cli parameter is set
- Added `last` and `filter` query string parameters to REST API `/backup/actions`, to avoid passing long JSON documents to the client
- Improved `FTP` remote storage parallel upload / download
- Added `FTP_CONCURRENCY` option (default MAX_CPU / 2)
- Added `FTP_DEBUG` setting to allow debugging FTP commands
- Added `FTP` to CI/CD on every commit
- Added a race condition check to CI/CD
BUG FIXES
- environment variable `LOG_LEVEL` now applies to `clickhouse-backup server` properly
- fix #280, incorrect prometheus metrics measurement for `/backup/create`, `/backup/upload`, `/backup/download`
- fix #273, return `S3_PART_SIZE` back, but calculate it smartly
- fix #252, now you can pass `last` and `filter` query string parameters
- fix #246, incorrect error messages when using `REMOTE_STORAGE=none`
- fix #283, properly handle error messages from the `FTP` server
- fix #268, properly restore legacy backups for schemas without a database name
BUG FIXES
- fix broken `system.backup_list` integration table after adding the `required` field in Altinity#263
- fix #274, invalid `SFTP_PASSWORD` environment variable usage
IMPROVEMENTS
- Added concurrency settings for upload and download, which allow loading table data in parallel for each table and each disk on multi-disk storages
- Up golang version to 1.17
- Updated go library dependencies to actual versions (excluding azure)
- Add Clickhouse 21.8 to the test matrix
- Now `S3_PART_SIZE` does not restrict the upload size; partSize is calculated depending on `MAX_FILE_SIZE`
- Improve logging for the delete operation
- Added `S3_DEBUG` option to allow debugging the S3 connection
- Decrease the number of SQL queries to system.* during backup commands
- Added options for RBAC and CONFIG backup, see `clickhouse-backup help create` and `clickhouse-backup help restore` for details
- Add `S3_CONCURRENCY` option to speed up backup upload to `S3`
- Add `SFTP_CONCURRENCY` option to speed up backup upload to `SFTP`
- Add `AZBLOB_USE_MANAGED_IDENTITY` support for ManagedIdentity for azure remote storage, thanks https://github.com/roman-vynar
- Add clickhouse-operator kubernetes manifest which runs `clickhouse-backup` in `server` mode on each clickhouse pod in a kubernetes cluster
- Add detailed description and restrictions for incremental backups
- Add `GCS_DEBUG` option
- Add `CLICKHOUSE_DEBUG` option to allow low-level debug for `clickhouse-go`
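As a sketch, the storage tuning options above map onto config.yml sections roughly like this (key names are inferred from the environment-variable names and should be verified against the ReadMe):

```yaml
s3:
  concurrency: 4               # S3_CONCURRENCY: parallel upload streams to S3
  debug: false                 # S3_DEBUG: log raw S3 requests when true
sftp:
  concurrency: 4               # SFTP_CONCURRENCY
azblob:
  use_managed_identity: true   # AZBLOB_USE_MANAGED_IDENTITY
```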
BUG FIXES
- fix #266, properly restore the legacy backup format
- fix #244, add `read_timeout` and `write_timeout` client-side timeouts for `clickhouse-go`
- fix #255, restrict connection pooling to 1 in `clickhouse-go`
- fix #256, `remote_storage: none` was breaking compression
- fix #266, legacy backups from versions prior to 1.0 can't be restored without `allow_empty_backup: true`
- fix #223, backup only database metadata for proxy-integrated database engines like MySQL, PostgreSQL
- fix `GCS` global buffer wrong usage when UPLOAD_CONCURRENCY > 1
- Remove unused `SKIP_SYNC_REPLICA_TIMEOUTS` option
BUG FIXES
- Fixed silent cancellation of uploads when a table has more than 4k files (fix #203, #163; thanks mastertheknife)
- Fixed download error for `zstd` and `brotli` compression formats
- Fixed bug where old-format backups weren't cleared
IMPROVEMENTS
- Added diff backups
- Added retries to the restore operation to resolve complex table dependencies (thanks @Slach)
- Added SFTP remote storage (thanks @combin)
- Now databases will be restored with the same engines (thanks @Slach)
- Added `create_remote` and `restore_remote` commands
- Changed the compression format list: added `zstd` and `brotli`; disabled `bzip2`, `sz`, and `xz`
BUG FIXES
- Fixed empty backup list when S3_PATH or AZBLOB_PATH points to the bucket root
- Fixed azblob container issue (Thanks @atykhyy)
IMPROVEMENTS
- Added 'allow_empty_backups' and 'api.create_integration_tables' options
- Wait for clickhouse in server mode (fix #169)
- Added Disk Mapping feature (fix #162)
BUG FIXES
- Fixed 'ftp' remote storage (#164)
This is the last release of v0.x.x
IMPROVEMENTS
- Added 'create_remote' and 'restore_remote' commands
- Changed update config behavior in API mode
IMPROVEMENTS
- Support for new versions of ClickHouse (#155)
- Support of Atomic Database Engine (#140, #141, #126)
- Support of multi-disk ClickHouse configurations (#51)
- Ability to upload and download specific tables from a backup
- Added partitions backup on remote storage (#83)
- Added support for backup/upload/download schema only (#138)
- Added a new backup format; select it with the `compression_format: none` option
BROKEN CHANGES
- Changed backup format
- Incremental backup on remote storage is not supported now, but will be supported in future versions
IMPROVEMENTS
- Added `CLICKHOUSE_AUTO_CLEAN_SHADOW` option for cleaning the shadow folder before backup; enabled by default
- Added `CLICKHOUSE_SYNC_REPLICATED_TABLES` option for syncing replicated tables before backup; enabled by default
- Improved statuses of operations in server mode
BUG FIXES
- Fixed bug with semaphores in server mode