diff --git a/5.0/index.html b/5.0/index.html index 6942b02..f9cca88 100644 --- a/5.0/index.html +++ b/5.0/index.html @@ -855,6 +855,27 @@ +

+ License + + + Python + + + Black + + + Tests + + + Type check + + + Dev build + + + Release build +

Backuper

A tool for performing scheduled database backups and transferring encrypted data to secure public clouds, for home labs, hobby projects, etc., in environments such as k8s, docker, vms.

Backups are in zip format using 7-zip, with strong AES-256 encryption under the hood.

@@ -895,7 +916,7 @@

Architectures

  • linux/arm64
  • Example

    -

    Everyday 5am backup to Google Cloud Storage of PostgreSQL database defined in the same file and running in docker container.

    +

Daily 5am backup of the PostgreSQL database defined in the same file and running in a docker container.

     1
      2
      3
    @@ -926,7 +947,7 @@ 

    Example

    Real world usage

    The author actively uses backuper (with GCS) for one production project plemiona-planer.pl postgres database (both PRD and STG) and for bunch of homelab projects including self hosted Firefly III mariadb, Grafana postgres, KeyCloak postgres, Nextcloud postgres and configuration file, Minecraft server files, and two other postgres dbs for some demo projects.

    See how it looks for ~2GB size database:

    -

    +



    diff --git a/5.0/search/search_index.json b/5.0/search/search_index.json index 5cc13c2..3d0a399 100644 --- a/5.0/search/search_index.json +++ b/5.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Backuper","text":"

    A tool for performing scheduled database backups and transferring encrypted data to secure public clouds, for home labs, hobby projects, etc., in environments such as k8s, docker, vms.

    Backups are in zip format using 7-zip, with strong AES-256 encryption under the hood.

    "},{"location":"#documentation","title":"Documentation","text":"
    • https://backuper.rafsaf.pl
    "},{"location":"#supported-backup-targets","title":"Supported backup targets","text":"
    • PostgreSQL (tested on 15, 14, 13, 12, 11)
    • MySQL (tested on 8.0, 5.7)
    • MariaDB (tested on 10.11, 10.6, 10.5, 10.4)
    • Single file
    • Directory
    "},{"location":"#supported-upload-providers","title":"Supported upload providers","text":"
    • Google Cloud Storage bucket
    • AWS S3 bucket
    • Azure Blob Storage
    • Debug (local)
    "},{"location":"#notifications","title":"Notifications","text":"
    • Discord
    • Email (SMTP)
    • Slack
    "},{"location":"#deployment-strategies","title":"Deployment strategies","text":"

    Using docker image: rafsaf/backuper:latest, see all tags on dockerhub

    • docker (docker compose) container
    • kubernetes deployment
    "},{"location":"#architectures","title":"Architectures","text":"
    • linux/amd64
    • linux/arm64
    "},{"location":"#example","title":"Example","text":"

    Everyday 5am backup to Google Cloud Storage of PostgreSQL database defined in the same file and running in docker container.

    # docker-compose.yml\n\nservices:\n  db:\n    image: postgres:15\n    environment:\n      - POSTGRES_PASSWORD=pwd\n  backuper:\n    image: rafsaf/backuper:latest\n    environment:\n      - POSTGRESQL_PG15=host=db password=pwd cron_rule=0 0 5 * * port=5432\n      - ZIP_ARCHIVE_PASSWORD=change_me\n      - BACKUP_PROVIDER=name=debug\n

    (NOTE this will use provider debug that store backups locally in the container).

    "},{"location":"#real-world-usage","title":"Real world usage","text":"

    The author actively uses backuper (with GCS) for one production project plemiona-planer.pl postgres database (both PRD and STG) and for bunch of homelab projects including self hosted Firefly III mariadb, Grafana postgres, KeyCloak postgres, Nextcloud postgres and configuration file, Minecraft server files, and two other postgres dbs for some demo projects.

    See how it looks for ~2GB size database:

    "},{"location":"configuration/","title":"Configuration","text":"

    Environemt variables

    Name Type Description Default ZIP_ARCHIVE_PASSWORD string[required] Zip archive password that all backups generated by this backuper instance will have. When it is lost, you lose access to your backups. Special characters are allowed since shlex quote is used around app, though not recommended so password can be used when using programs in terminal like unzip. - BACKUP_PROVIDER string[required] See Providers chapter, choosen backup provider for example GCS. - INSTANCE_NAME string Name of this backuper instance, will be used for example when sending fail messages. Defaults to system hostname. system hostname BACKUP_MAX_NUMBER int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days in backup target. Note this global default and can be overwritten by using max_backups param in specific targets. Min 1 and max 998. 7 BACKUP_MIN_RETENTION_DAYS int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Note this global default and can be overwritten by using min_retention_days param in specific targets. Min 0 and max 36600. 3 ROOT_MODE bool If false, process in container will start backuper using user with minimal permissions required. If true, it will run as root (it may help for example with file/directory backup permission issues in mounted volumes). false POSTGRESQL_... backup target syntax PostgreSQL database target, see PostgreSQL. - MYSQL_... backup target syntax MySQL database target, see MySQL. - MARIADB_... backup target syntax MariaDB database target, see MariaDB. - SINGLEFILE_... backup target syntax Single file database target, see Single file. - DIRECTORY_... backup target syntax Directory database target, see Directory. - DISCORD_WEBHOOK_URL http url Webhook URL for fail messages. - DISCORD_MAX_MSG_LEN int Maximum length of messages send to discord API. Sensible default used. Min 150 and max 10000. 1500 SLACK_WEBHOOK_URL http url Webhook URL for fail messages. - SLACK_MAX_MSG_LEN int Maximum length of messages send to slack API. Sensible default used. Min 150 and max 10000. 1500 SMTP_HOST string SMTP server host. - SMTP_FROM_ADDR string Email address that will send emails. - SMTP_PASSWORD string Password for SMTP_FROM_ADDR. - SMTP_TO_ADDRS string Comma separated list of email addresses to send emails. For example email1@example.com,email2@example.com. - SMTP_PORT int SMTP server port. 587 LOG_LEVEL string Case sensitive const log level, must be one of INFO, DEBUG, WARNING, ERROR, CRITICAL. INFO SUBPROCESS_TIMEOUT_SECS int Indicates how long subprocesses can last. Note that all backups are run from shell in subprocesses. Defaults to 3600 seconds which should be enough for even big dbs to make backup of. Min 5 and max 86400 (24h). 3600 ZIP_ARCHIVE_LEVEL int Compression level of 7-zip via -mx option: -mx[N] : set compression level: -mx1 (fastest) ... -mx9 (ultra). Defaults to 3 which should be sufficient and fast enough. Min 1 and max 9. 3 LOG_FOLDER_PATH string Path to store log files, for local development ./logs, in container /var/log/backuper. 
/var/log/backuper SIGTERM_TIMEOUT_SECS int Time in seconds on exit how long backuper will wait for ongoing backup threads before force killing them and exiting. Min 0 and max 86400 (24h). 30 ZIP_SKIP_INTEGRITY_CHECK bool By default set to false and after 7zip archive is created, integrity check runs on it. You can opt out this behaviour for performance reasons, use true. false BACKUPER_CPU_ARCHITECTURE string CPU architecture, supported amd64 and arm64. Docker container will set it automatically so probably do not change it. amd64

    "},{"location":"deployment/","title":"Deployment","text":"

    In general, use docker image rafsaf/backuper (here available tags on dockerhub), it supports both amd64 and arm64 architectures. Standard way would be to run it with docker compose or as a kubernetes deployment. If not sure, use latest.

    "},{"location":"deployment/#docker-compose","title":"Docker Compose","text":""},{"location":"deployment/#docker-compose-file","title":"Docker compose file","text":"
    # docker-compose.yml\n\nservices:\n  backuper:\n    container_name: backuper\n    image: rafsaf/backuper:latest\n    environment:\n      - POSTGRESQL_DB1=...\n      - MYSQL_DB2=...\n      - MARIADB_DB3=...\n\n      - ZIP_ARCHIVE_PASSWORD=change_me\n      - BACKUP_PROVIDER=name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo=\n
    "},{"location":"deployment/#notes","title":"Notes","text":"
    • For hard debug you can set LOG_LEVEL=DEBUG and use (container name is backuper):
      docker logs backuper\n
    • There is runtime flag --single that ignores cron, make all databases backups and exits. To use it when having already running container, use:
      docker compose run --rm backuper python -m backuper.main --single\n
      BE CAREFUL, if your setup if fine, this will upload backup files to cloud provider, so costs may apply.
    • There is runtime flag --debug-notifications that setup notifications, raise dummy exception and exits. This can help ensure notifications are working:
      docker compose run --rm backuper python -m backuper.main --debug-notifications\n
    "},{"location":"deployment/#kubernetes","title":"Kubernetes","text":"
    # backuper-deployment.yml\n\nkind: Namespace\napiVersion: v1\nmetadata:\n  name: backuper\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: backuper-secrets\n  namespace: backuper\ntype: Opaque\nstringData:\n  POSTGRESQL_DB1: ...\n  MYSQL_DB2: ...\n  MARIADB_DB3: ...\n  ZIP_ARCHIVE_PASSWORD: change_me\n  BACKUP_PROVIDER: \"name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo=\"\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  namespace: backuper\n  name: backuper\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: backuper\n  template:\n    metadata:\n      labels:\n        app: backuper\n    spec:\n      containers:\n        - image: rafsaf/backuper:latest\n          name: backuper\n          envFrom:\n            - secretRef:\n                name: backuper-secrets\n
    "},{"location":"deployment/#notes_1","title":"Notes","text":"
    • For hard debug you can set LOG_LEVEL: DEBUG and use (for brevity random pod name used):
      kubectl logs backuper-9c8b8b77d-z5xsc -n backuper\n
    • There is runtime flag --single that ignores cron, make all databases backups and exits. To use it when having already running container, use:
      kubectl exec --stdin --tty backuper-9c8b8b77d-z5xsc -n backuper -- runuser -u backuper -- python -m backuper.main --single\n
      BE CAREFUL, if your setup if fine, this will upload backup files to cloud provider, so costs may apply.
    • There is runtime flag --debug-notifications that setup notifications, raise dummy exception and exits. This can help ensure notifications are working:
      kubectl exec --stdin --tty backuper-9c8b8b77d-z5xsc -n backuper -- runuser -u backuper -- python -m backuper.main --debug-notifications\n
    "},{"location":"how_to_restore/","title":"How to restore","text":"

    To restore backups you already have in cloud, for sure you will need 7-zip, unzip or equivalent software to unzip the archive (and of course password ZIP_ARCHIVE_PASSWORD used for creating it in a first place). That step is ommited below.

    For below databases restore, you can for sure use backuper image itself (as it already has required software installed, for restore also, and must have network access to database). Tricky part can be \"how to deliver zipped backup file to backuper container\". This is also true for transporting it anywhere. Usual way is to use scp and for containers for docker compose and kubernetes respectively docker compose cp and kubectl cp.

    Other idea if you feel unhappy with passing your database backups around (even if password protected) would be to make the backup file public for a moment and available to download and use tools like curl to download it on destination place. If leaked, there is yet very strong cryptography to protect you. This should be sufficient for bunch of projects.

    "},{"location":"how_to_restore/#directory-and-single-file","title":"Directory and single file","text":"

    Just file or directory, copy them back where you want.

    "},{"location":"how_to_restore/#postgresql","title":"PostgreSQL","text":"

    Backup is made using pg_dump (see def _backup() params). To restore database, you will need psql https://www.postgresql.org/docs/current/app-psql.html and network access to database. If on debian/ubuntu, this is provided by apt package postgresql-client.

    Follow docs (backuper creates typical SQL file backups, nothing special about them), but command will look something like that:

    psql -h localhost -p 5432 -U postgres database_name -W < backup_file.sql\n
    "},{"location":"how_to_restore/#mysql","title":"MySQL","text":"

    Backup is made using mysqldump (see def _backup() params). To restore database, you will need mysql https://dev.mysql.com/doc/refman/8.0/en/mysql.html and network access to database. If on debian/ubuntu, this is provided by apt package mysql-client.

    Follow docs (backuper creates typical SQL file backups, nothing special about them), but command will look something like that:

    mysql -h localhost -P 3306 -u root -p database_name < backup_file.sql\n
    "},{"location":"how_to_restore/#mariadb","title":"MariaDB","text":"

    Backup is made using mariadb-dump (see def _backup() params). To restore database, you will need mysql or mariadb https://dev.mysql.com/doc/refman/8.0/en/mysql.html or https://mariadb.com/kb/en/mariadb-command-line-client/ and network access to database. If on debian/ubuntu, this is provided by apt package mysql-client or see https://mariadb.com/kb/en/mariadb-package-repository-setup-and-usage/.

    Follow docs (backuper creates typical SQL file backups, nothing special about them), but command will look something like that:

    mariadb -h localhost -P 3306 -u root -p database_name < backup_file.sql\n

    "},{"location":"backup_targets/directory/","title":"Directory","text":""},{"location":"backup_targets/directory/#environment-variable","title":"Environment variable","text":"
    DIRECTORY_SOME_STRING=\"abs_path=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"DIRECTORY_\" will be handled as Directory. There can be multiple files paths definition for one backuper instance, for example DIRECTORY_FOO and DIRECTORY_BAR. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/directory/#params","title":"Params","text":"Name Type Description Default abs_path string[requried] Absolute path to folder for backup. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/directory/#examples","title":"Examples","text":"
    # 1. Directory /home/user/folder with backup every single minute\nDIRECTORY_FIRST='abs_path=/home/user/folder cron_rule=* * * * *'\n\n# 2. Directory /etc with backup on every night (UTC) at 05:00\nDIRECTORY_SECOND='abs_path=/etc cron_rule=0 5 * * *'\n\n# 3. Mounted directory /mnt/homedir with backup on every 6 hours at '15 with max number of backups of 20\nDIRECTORY_HOME_DIR='abs_path=/mnt/homedir cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/file/","title":"Single file","text":""},{"location":"backup_targets/file/#environment-variable","title":"Environment variable","text":"
    SINGLEFILE_SOME_STRING=\"abs_path=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"SINGLEFILE_\" will be handled as Single File. There can be multiple files paths definition for one backuper instance, for example SINGLEFILE_FOO and SINGLEFILE_BAR. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/file/#params","title":"Params","text":"Name Type Description Default abs_path string[requried] Absolute path to file for backup. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/file/#examples","title":"Examples","text":"
    # File /home/user/file.txt with backup every single minute\nSINGLEFILE_FIRST='abs_path=/home/user/file.txt cron_rule=* * * * *'\n\n# File /etc/hosts with backup on every night (UTC) at 05:00\nSINGLEFILE_SECOND='abs_path=/etc/hosts cron_rule=0 5 * * *'\n\n# File config.json in mounted dir /mnt/appname with backup on every 6 hours at '15 with max number of backups of 20\nSINGLEFILE_THIRD='abs_path=/mnt/appname/config.json cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/mariadb/","title":"MariaDB","text":""},{"location":"backup_targets/mariadb/#environment-variable","title":"Environment variable","text":"
    MARIADB_SOME_STRING=\"host=... password=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"MARIADB_\" will be handled as MariaDB. There can be multiple files paths definition for one backuper instance, for example MARIADB_FOO_MY_DB1 and MARIADB_BAR_MY_DB2. Supported versions are: 10.11, 10.6, 10.5, 10.4. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/mariadb/#params","title":"Params","text":"Name Type Description Default password string[requried] Mariadb database password. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - user string Mariadb database username. root host string Mariadb database hostname. localhost port int Mariadb database port. 3306 db string Mariadb database name. mariadb max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/mariadb/#examples","title":"Examples","text":"
    # 1. Local MariaDB with backup every single minute\nMARIADB_FIRST_DB='host=localhost port=3306 password=secret cron_rule=* * * * *'\n\n# 2. MariaDB in local network with backup on every night (UTC) at 05:00\nMARIADB_SECOND_DB='host=10.0.0.1 port=3306 user=foo password=change_me! db=bar cron_rule=0 5 * * *'\n\n# 3. MariaDB in local network with backup on every 6 hours at '15 with max number of backups of 20\nMARIADB_THIRD_DB='host=192.168.1.5 port=3306 user=root password=change_me_please! db=project cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/mysql/","title":"MySQL","text":""},{"location":"backup_targets/mysql/#environment-variable","title":"Environment variable","text":"
    MYSQL_SOME_STRING=\"host=... password=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"MYSQL_\" will be handled as MySQL. There can be multiple files paths definition for one backuper instance, for example MYSQL_FOO_MY_DB1 and MYSQL_BAR_MY_DB2. Supported versions are: 8.0, 5.7. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/mysql/#params","title":"Params","text":"Name Type Description Default password string[requried] MySQL database password. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - user string MySQL database username. root host string MySQL database hostname. localhost port int MySQL database port. 3306 db string MySQL database name. mysql max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/mysql/#examples","title":"Examples","text":"
    # 1. Local MySQL with backup every single minute\nMYSQL_FIRST_DB='host=localhost port=3306 password=secret cron_rule=* * * * *'\n\n# 2. MySQL in local network with backup on every night (UTC) at 05:00\nMYSQL_SECOND_DB='host=10.0.0.1 port=3306 user=foo password=change_me! db=bar cron_rule=0 5 * * *'\n\n# 3. MySQL in local network with backup on every 6 hours at '15 with max number of backups of 20\nMYSQL_THIRD_DB='host=192.168.1.5 port=3306 user=root password=change_me_please! db=project cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/postgresql/","title":"PostgreSQL","text":""},{"location":"backup_targets/postgresql/#environment-variable","title":"Environment variable","text":"
    POSTGRESQL_SOME_STRING=\"host=... password=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"POSTGRESQL_\" will be handled as PostgreSQL. There can be multiple files paths definition for one backuper instance, for example POSTGRESQL_FOO_MY_DB1 and POSTGRESQL_BAR_MY_DB2. Supported versions are: 15, 14, 13, 12, 11. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/postgresql/#params","title":"Params","text":"Name Type Description Default password string[requried] PostgreSQL database password. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - user string PostgreSQL database username. postgres host string PostgreSQL database hostname. localhost port int PostgreSQL database port. 5432 db string PostgreSQL database name. postgres max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/postgresql/#examples","title":"Examples","text":"
    # 1. Local PostgreSQL with backup every single minute\nPOSTGRESQL_FIRST_DB='host=localhost port=5432 password=secret cron_rule=* * * * *'\n\n# 2. PostgreSQL in local network with backup on every night (UTC) at 05:00\nPOSTGRESQL_SECOND_DB='host=10.0.0.1 port=5432 user=foo password=change_me! db=bar cron_rule=0 5 * * *'\n\n# 3. PostgreSQL in local network with backup on every 6 hours at '15 with max number of backups of 20\nPOSTGRESQL_THIRD_DB='host=192.168.1.5 port=5432 user=root password=change_me_please! db=project cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"notifications/discord/","title":"Discord","text":"

It is possible to send messages to your Discord channels in the event of failed backups.

    Integration is via Discord webhooks and environment variables.

    Follow their documentation https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks.

    You should be able to generate webhooks like \"https://discord.com/api/webhooks/1111111111/some-long-token\".

    "},{"location":"notifications/discord/#environemt-variables","title":"Environemt variables","text":"Name Type Description Default DISCORD_WEBHOOK_URL http url Webhook URL for fail messages. - DISCORD_MAX_MSG_LEN int Maximum length of messages send to discord API. Sensible default used. Min 150 and max 10000. 1500"},{"location":"notifications/discord/#examples","title":"Examples:","text":"
    DISCORD_WEBHOOK_URL=\"https://discord.com/api/webhooks/1111111111/long-token\"\n
    "},{"location":"notifications/slack/","title":"Slack","text":"

It is possible to send messages to your Slack channels in the event of failed backups.

    Integration is via Slack webhooks and environment variables.

    Follow their documentation https://api.slack.com/messaging/webhooks#create_a_webhook.

    You should be able to generate webhooks like \"https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX\".

    "},{"location":"notifications/slack/#environemt-variables","title":"Environemt variables","text":"Name Type Description Default SLACK_WEBHOOK_URL http url Webhook URL for fail messages. - SLACK_MAX_MSG_LEN int Maximum length of messages send to slack API. Sensible default used. Min 150 and max 10000. 1500"},{"location":"notifications/slack/#examples","title":"Examples:","text":"
    SLACK_WEBHOOK_URL=\"https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX\"\n
    "},{"location":"notifications/smtp/","title":"Email (SMTP)","text":"

It is possible to send messages via email using the SMTP protocol. The implementation uses STARTTLS, so be sure your mail server supports it. For technical details refer to https://docs.python.org/3/library/smtplib.html.

Note: when any of the params SMTP_HOST, SMTP_FROM_ADDR, SMTP_PASSWORD, SMTP_TO_ADDRS is set, all of them are required. If not provided, an exception will be raised.

    "},{"location":"notifications/smtp/#environemt-variables","title":"Environemt variables","text":"Name Type Description Default SMTP_HOST string[required] SMTP server host. - SMTP_FROM_ADDR string[required] Email address that will send emails. - SMTP_PASSWORD string[required] Password for SMTP_FROM_ADDR. - SMTP_TO_ADDRS string[required] Comma separated list of email addresses to send emails. For example email1@example.com,email2@example.com. - SMTP_PORT int SMTP server port. 587"},{"location":"notifications/smtp/#examples","title":"Examples:","text":"
    SMTP_HOST=\"pro2.mail.ovh.net\"\nSMTP_FROM_ADDR=\"test@example.com\"\nSMTP_PASSWORD=\"changeme\"\nSMTP_TO_ADDRS=\"me@example.com,other@example.com\"\nSMTP_PORT=587\n
    "},{"location":"providers/aws_s3/","title":"AWS S3","text":""},{"location":"providers/aws_s3/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=aws bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 key_id=AKIAU5JB5UQDL8C3K6UP key_secret=nFTXlO7nsPNNUj59tFE21Py9tOO8fwOtHNsr3YwN region=eu-central-1\"\n

    Uses AWS S3 bucket for storing backups.

    Note

There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed from name, in this case name=aws. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"providers/aws_s3/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string gcs to use Google Cloud Storage. - bucket_name string[requried] Your globally unique bucket name. - bucket_upload_path string[requried] Prefix that every created backup will have, for example if it is equal to my_backuper_instance_1, paths to backups will look like my_backuper_instance_1/your_backup_target_eg_postgresql/file123.zip. Usually this should be something unique for this backuper instance, for example k8s_foo_backuper. - region string[requried] Bucket region. - key_id string[requried] IAM user access key id, see Resources below. - key_secret string[requried] IAM user access key secret, see Resources below. - max_bandwidth int Max bandwith of file upload that is passed to aws sdk transfer config, see their docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig. null"},{"location":"providers/aws_s3/#examples","title":"Examples","text":"
    # 1. Bucket pets-bucket\nBACKUP_PROVIDER='name=aws bucket_name=pets-bucket bucket_upload_path=pets_backuper key_id=AKIAU5JB5UQDL8C3K6UP key_secret=nFTXlO7nsPNNUj59tFE21Py9tOO8fwOtHNsr3YwN region=eu-central-1'\n\n# 2. Bucket birds with other region\nBACKUP_PROVIDER='name=aws bucket_name=birds bucket_upload_path=birds_backuper key_id=AKIAU5JB5UQDL8C3K6UP key_secret=nFTXlO7nsPNNUj59tFE21Py9tOO8fwOtHNsr3YwN region=us-east-1'\n
    "},{"location":"providers/aws_s3/#resources","title":"Resources","text":""},{"location":"providers/aws_s3/#bucket-and-iam-walkthrough","title":"Bucket and IAM walkthrough","text":"

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/walkthrough1.html

    "},{"location":"providers/aws_s3/#giving-iam-user-required-permissions","title":"Giving IAM user required permissions","text":"

Assuming your bucket name is my_bucket_name and the upload path is test-upload-path, 3 permissions are needed for the IAM user (s3:ListBucket, s3:PutObject, s3:DeleteObject):

    {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Sid\": \"AllowList\",\n\"Effect\": \"Allow\",\n\"Action\": \"s3:ListBucket\",\n\"Resource\": \"arn:aws:s3:::my_bucket_name\",\n\"Condition\": {\n\"StringLike\": {\n\"s3:prefix\": \"test-upload-path/*\"\n}\n}\n},\n{\n\"Sid\": \"AllowPutGetDelete\",\n\"Effect\": \"Allow\",\n\"Action\": [\n\"s3:PutObject\",\n\"s3:DeleteObject\"\n],\n\"Resource\": \"arn:aws:s3:::my_bucket_name/test-upload-path/*\"\n}\n]\n}\n

    "},{"location":"providers/azure/","title":"Azure Blob Storage","text":""},{"location":"providers/azure/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=azure container_name=my-backuper-instance connect_string=DefaultEndpointsProtocol=https;AccountName=accname;AccountKey=secret;EndpointSuffix=core.windows.net\"\n

    Uses Azure Blob Storage for storing backups.

    Note

There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed from name, in this case name=azure. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"providers/azure/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string azure to use Google Cloud Storage. - container_name string[requried] Storage account container name. It must be already created, backuper won't create new container. - connect_string string[requried] Connection string copied from your storage account \"Access keys\" section. -"},{"location":"providers/azure/#examples","title":"Examples","text":"
    # 1. Storage account accname and container name my-backuper-instance\nBACKUP_PROVIDER=\"name=azure container_name=my-backuper-instance connect_string=DefaultEndpointsProtocol=https;AccountName=accname;AccountKey=secret;EndpointSuffix=core.windows.net\"\n\n# 2. Storage account birds and container name birds\nBACKUP_PROVIDER=\"name=azure container_name=birds connect_string=DefaultEndpointsProtocol=https;AccountName=birds;AccountKey=secret;EndpointSuffix=core.windows.net\"\n
    "},{"location":"providers/azure/#resources","title":"Resources","text":""},{"location":"providers/azure/#creating-azure-storage-account","title":"Creating azure storage account","text":"

    https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal

    "},{"location":"providers/debug/","title":"Debug","text":""},{"location":"providers/debug/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=debug\"\n

Uses only local files (a folder inside the container) for storing backups. This is meant only for debug purposes.

If you absolutely must not upload backups to the outside world, consider adding a persistent volume for the folder where backups live in the container, that is /var/lib/backuper/data.
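As a sketch, a named docker volume could be mounted at that path so debug backups survive container recreation (the volume name backuper_data is illustrative):

    docker run -d --name backuper -e ZIP_ARCHIVE_PASSWORD=change_me -e BACKUP_PROVIDER='name=debug' -v backuper_data:/var/lib/backuper/data rafsaf/backuper:latest\n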

    Note

There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed from name, in this case name=debug.

    "},{"location":"providers/debug/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string debug to use Debug. -"},{"location":"providers/debug/#examples","title":"Examples","text":"
    # 1. Debug provider\nBACKUP_PROVIDER='name=debug'\n
    "},{"location":"providers/google_cloud_storage/","title":"Google Cloud Storage","text":""},{"location":"providers/google_cloud_storage/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo=\"\n

    Uses Google Cloud Storage bucket for storing backups.

    Note

There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed from name, in this case name=gcs. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"providers/google_cloud_storage/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string gcs to use Google Cloud Storage. - bucket_name string[requried] Your globally unique bucket name. - bucket_upload_path string[requried] Prefix that every created backup will have, for example if it is equal to my_backuper_instance_1, paths to backups will look like my_backuper_instance_1/your_backup_target_eg_postgresql/file123.zip. Usually this should be something unique for this backuper instance, for example k8s_foo_backuper. - service_account_base64 string[requried] Base64 JSON service account file created in IAM, with write and read access permissions to bucket, see Resources below. - chunk_size_mb int The size of a chunk of data transfered to GCS, consider lower value only if for example your internet connection is slow or you know what you are doing, 100MB is google default. 100 chunk_timeout_secs int The chunk of data transfered to GCS upload timeout, consider higher value only if for example your internet connection is slow or you know what you are doing, 60s is google default. 60"},{"location":"providers/google_cloud_storage/#examples","title":"Examples","text":"
    # 1. Bucket pets-bucket\nBACKUP_PROVIDER='name=gcs bucket_name=pets-bucket bucket_upload_path=pets_backuper service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo='\n\n# 2. Bucket birds with smaller chunk size\nBACKUP_PROVIDER='name=gcs bucket_name=birds bucket_upload_path=birds_backuper chunk_size_mb=25 chunk_timeout_secs=120 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo='\n
    "},{"location":"providers/google_cloud_storage/#resources","title":"Resources","text":""},{"location":"providers/google_cloud_storage/#creating-bucket","title":"Creating bucket","text":"

    https://cloud.google.com/storage/docs/creating-buckets

    "},{"location":"providers/google_cloud_storage/#creating-service-account","title":"Creating service account","text":"

    https://cloud.google.com/iam/docs/service-accounts-create

    "},{"location":"providers/google_cloud_storage/#giving-it-required-roles-to-service-account","title":"Giving it required roles to service account","text":"
    1. Go \"IAM and admin\" -> \"IAM\"

    2. Find your service account and update its roles

Give it the following roles so it will have read access to the whole bucket \"my_bucket_name\" and admin access only for the path prefix \"my_backuper_instance_1\" in bucket \"my_bucket_name\":

    1. Storage Object Admin (with IAM condition: NAME starts with projects/_/buckets/my_bucket_name/objects/my_backuper_instance_1)
    2. Storage Object Viewer (with IAM condition: NAME starts with projects/_/buckets/my_bucket_name)

After successfully creating the service account, create a new private key of JSON type and download it. A file similar to your_project_name-03189413be28.json will appear in your Downloads.

To get base64 (without any newlines) from it, use the command:

    cat your_project_name-03189413be28.json | base64 -w 0\n
    "},{"location":"providers/google_cloud_storage/#terraform","title":"Terraform","text":"

If using terraform for managing cloud infra, the Service Account definition can be as follows:

    resource \"google_service_account\" \"backuper-my_backuper_instance_1\" {\naccount_id   = \"backuper-my_backuper_instance_1\"\ndisplay_name = \"SA my_backuper_instance_1 for backuper bucket access\"\n}\n\nresource \"google_project_iam_member\" \"backuper-my_backuper_instance_1-iam-object-admin\" {\nproject = local.project_id\n  role    = \"roles/storage.objectAdmin\"\nmember  = \"serviceAccount:${google_service_account.backuper-my_backuper_instance_1.email}\"\ncondition {\ntitle      = \"object_admin_only_backuper_bucket_specific_path\"\nexpression = \"resource.name.startsWith(\\\"projects/_/buckets/my_bucket_name/objects/my_backuper_instance_1\\\")\"\n}\n}\nresource \"google_project_iam_member\" \"backuper-my_backuper_instance_1-iam-object-viewer\" {\nproject = local.project_id\n  role    = \"roles/storage.objectViewer\"\nmember  = \"serviceAccount:${google_service_account.backuper-my_backuper_instance_1.email}\"\n\ncondition {\ntitle      = \"object_viewer_only_backuper_bucket\"\nexpression = \"resource.name.startsWith(\\\"projects/_/buckets/my_bucket_name\\\")\"\n}\n}\n

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Backuper","text":""},{"location":"#backuper","title":"Backuper","text":"

    A tool for performing scheduled database backups and transferring encrypted data to secure public clouds, for home labs, hobby projects, etc., in environments such as k8s, docker, vms.

    Backups are in zip format using 7-zip, with strong AES-256 encryption under the hood.
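For illustration, such an archive can be listed and integrity-tested locally with 7-zip, assuming an archive named backup_file.zip and the original archive password:

    # list contents and verify integrity of an encrypted backup archive\n7z l -p\"$ZIP_ARCHIVE_PASSWORD\" backup_file.zip\n7z t -p\"$ZIP_ARCHIVE_PASSWORD\" backup_file.zip\n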

    "},{"location":"#documentation","title":"Documentation","text":"
    • https://backuper.rafsaf.pl
    "},{"location":"#supported-backup-targets","title":"Supported backup targets","text":"
    • PostgreSQL (tested on 15, 14, 13, 12, 11)
    • MySQL (tested on 8.0, 5.7)
    • MariaDB (tested on 10.11, 10.6, 10.5, 10.4)
    • Single file
    • Directory
    "},{"location":"#supported-upload-providers","title":"Supported upload providers","text":"
    • Google Cloud Storage bucket
    • AWS S3 bucket
    • Azure Blob Storage
    • Debug (local)
    "},{"location":"#notifications","title":"Notifications","text":"
    • Discord
    • Email (SMTP)
    • Slack
    "},{"location":"#deployment-strategies","title":"Deployment strategies","text":"

    Using docker image: rafsaf/backuper:latest, see all tags on dockerhub

    • docker (docker compose) container
    • kubernetes deployment
    "},{"location":"#architectures","title":"Architectures","text":"
    • linux/amd64
    • linux/arm64
    "},{"location":"#example","title":"Example","text":"

Daily 5am backup of the PostgreSQL database defined in the same file and running in a docker container.

    # docker-compose.yml\n\nservices:\n  db:\n    image: postgres:15\n    environment:\n      - POSTGRES_PASSWORD=pwd\n  backuper:\n    image: rafsaf/backuper:latest\n    environment:\n      - POSTGRESQL_PG15=host=db password=pwd cron_rule=0 0 5 * * port=5432\n      - ZIP_ARCHIVE_PASSWORD=change_me\n      - BACKUP_PROVIDER=name=debug\n

(NOTE: this will use the debug provider, which stores backups locally in the container).
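To see where those local backups land, one option (using the service name from the compose file above) is:

    docker compose exec backuper ls /var/lib/backuper/data\n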

    "},{"location":"#real-world-usage","title":"Real world usage","text":"

The author actively uses backuper (with GCS) for the postgres database of one production project, plemiona-planer.pl (both PRD and STG), and for a bunch of homelab projects including self-hosted Firefly III mariadb, Grafana postgres, KeyCloak postgres, Nextcloud postgres and configuration file, Minecraft server files, and two other postgres dbs for some demo projects.

See how it looks for a ~2GB database:

    "},{"location":"configuration/","title":"Configuration","text":"

Environment variables

Name Type Description Default ZIP_ARCHIVE_PASSWORD string[required] Zip archive password that all backups generated by this backuper instance will have. When it is lost, you lose access to your backups. Special characters are allowed since shlex quote is used in the app, though they are not recommended, so the password can also be used with terminal programs like unzip. - BACKUP_PROVIDER string[required] See Providers chapter, the chosen backup provider, for example GCS. - INSTANCE_NAME string Name of this backuper instance, used for example when sending fail messages. Defaults to system hostname. system hostname BACKUP_MAX_NUMBER int Soft limit on how many backups can live at once for a backup target. Defaults to 7. This must make sense with the cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days in the backup target. Note this is a global default and can be overwritten by using the max_backups param in specific targets. Min 1 and max 998. 7 BACKUP_MIN_RETENTION_DAYS int Hard minimum backups lifetime in days. Backuper won't ever delete files earlier, regardless of other options. Note this is a global default and can be overwritten by using the min_retention_days param in specific targets. Min 0 and max 36600. 3 ROOT_MODE bool If false, the process in the container will start backuper as a user with the minimal permissions required. If true, it will run as root (it may help for example with file/directory backup permission issues in mounted volumes). false POSTGRESQL_... backup target syntax PostgreSQL database target, see PostgreSQL. - MYSQL_... backup target syntax MySQL database target, see MySQL. - MARIADB_... backup target syntax MariaDB database target, see MariaDB. - SINGLEFILE_... backup target syntax Single file backup target, see Single file. - DIRECTORY_... backup target syntax Directory backup target, see Directory. - DISCORD_WEBHOOK_URL http url Webhook URL for fail messages. - DISCORD_MAX_MSG_LEN int Maximum length of messages sent to the discord API. A sensible default is used. Min 150 and max 10000. 1500 SLACK_WEBHOOK_URL http url Webhook URL for fail messages. - SLACK_MAX_MSG_LEN int Maximum length of messages sent to the slack API. A sensible default is used. Min 150 and max 10000. 1500 SMTP_HOST string SMTP server host. - SMTP_FROM_ADDR string Email address that will send emails. - SMTP_PASSWORD string Password for SMTP_FROM_ADDR. - SMTP_TO_ADDRS string Comma-separated list of email addresses to send emails to. For example email1@example.com,email2@example.com. - SMTP_PORT int SMTP server port. 587 LOG_LEVEL string Case-sensitive log level, must be one of INFO, DEBUG, WARNING, ERROR, CRITICAL. INFO SUBPROCESS_TIMEOUT_SECS int Indicates how long subprocesses can last. Note that all backups are run from the shell in subprocesses. Defaults to 3600 seconds, which should be enough to back up even big dbs. Min 5 and max 86400 (24h). 3600 ZIP_ARCHIVE_LEVEL int Compression level of 7-zip via the -mx option: -mx[N] : set compression level: -mx1 (fastest) ... -mx9 (ultra). Defaults to 3, which should be sufficient and fast enough. Min 1 and max 9. 3 LOG_FOLDER_PATH string Path to store log files, for local development ./logs, in the container /var/log/backuper. /var/log/backuper SIGTERM_TIMEOUT_SECS int Time in seconds that, on exit, backuper will wait for ongoing backup threads before force-killing them and exiting. Min 0 and max 86400 (24h). 30 ZIP_SKIP_INTEGRITY_CHECK bool By default set to false; after the 7zip archive is created, an integrity check runs on it. You can opt out of this behaviour for performance reasons by using true. false BACKUPER_CPU_ARCHITECTURE string CPU architecture, supported values amd64 and arm64. The Docker container will set it automatically, so probably do not change it. amd64
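For orientation, here is a minimal, hypothetical sketch of how a few of these variables might be combined for one instance (the values and the POSTGRESQL_MAIN_DB name are placeholders, not defaults):

    ZIP_ARCHIVE_PASSWORD=change_me\nBACKUP_PROVIDER='name=debug'\nINSTANCE_NAME=homelab-backuper\nBACKUP_MAX_NUMBER=7\nBACKUP_MIN_RETENTION_DAYS=3\nLOG_LEVEL=INFO\nPOSTGRESQL_MAIN_DB='host=db password=secret cron_rule=0 5 * * *'\n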

    "},{"location":"deployment/","title":"Deployment","text":"

In general, use the docker image rafsaf/backuper (available tags on dockerhub); it supports both amd64 and arm64 architectures. The standard way is to run it with docker compose or as a kubernetes deployment. If not sure, use latest.

    "},{"location":"deployment/#docker-compose","title":"Docker Compose","text":""},{"location":"deployment/#docker-compose-file","title":"Docker compose file","text":"
    # docker-compose.yml\n\nservices:\n  backuper:\n    container_name: backuper\n    image: rafsaf/backuper:latest\n    environment:\n      - POSTGRESQL_DB1=...\n      - MYSQL_DB2=...\n      - MARIADB_DB3=...\n\n      - ZIP_ARCHIVE_PASSWORD=change_me\n      - BACKUP_PROVIDER=name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo=\n
    "},{"location":"deployment/#notes","title":"Notes","text":"
• For deeper debugging you can set LOG_LEVEL=DEBUG and use (the container name is backuper):
      docker logs backuper\n
• There is a runtime flag --single that ignores cron, makes backups of all databases and exits. To use it with an already running container, use:
      docker compose run --rm backuper python -m backuper.main --single\n
  BE CAREFUL: if your setup is fine, this will upload backup files to the cloud provider, so costs may apply.
• There is a runtime flag --debug-notifications that sets up notifications, raises a dummy exception and exits. This can help ensure notifications are working:
      docker compose run --rm backuper python -m backuper.main --debug-notifications\n
    "},{"location":"deployment/#kubernetes","title":"Kubernetes","text":"
    # backuper-deployment.yml\n\nkind: Namespace\napiVersion: v1\nmetadata:\n  name: backuper\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: backuper-secrets\n  namespace: backuper\ntype: Opaque\nstringData:\n  POSTGRESQL_DB1: ...\n  MYSQL_DB2: ...\n  MARIADB_DB3: ...\n  ZIP_ARCHIVE_PASSWORD: change_me\n  BACKUP_PROVIDER: \"name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo=\"\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  namespace: backuper\n  name: backuper\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: backuper\n  template:\n    metadata:\n      labels:\n        app: backuper\n    spec:\n      containers:\n        - image: rafsaf/backuper:latest\n          name: backuper\n          envFrom:\n            - secretRef:\n                name: backuper-secrets\n
    "},{"location":"deployment/#notes_1","title":"Notes","text":"
• For deeper debugging you can set LOG_LEVEL: DEBUG and use (a random pod name is used for brevity):
      kubectl logs backuper-9c8b8b77d-z5xsc -n backuper\n
• There is a runtime flag --single that ignores cron, makes backups of all databases and exits. To use it with an already running container, use:
      kubectl exec --stdin --tty backuper-9c8b8b77d-z5xsc -n backuper -- runuser -u backuper -- python -m backuper.main --single\n
  BE CAREFUL: if your setup is fine, this will upload backup files to the cloud provider, so costs may apply.
• There is a runtime flag --debug-notifications that sets up notifications, raises a dummy exception and exits. This can help ensure notifications are working:
      kubectl exec --stdin --tty backuper-9c8b8b77d-z5xsc -n backuper -- runuser -u backuper -- python -m backuper.main --debug-notifications\n
    "},{"location":"how_to_restore/","title":"How to restore","text":"

To restore backups you already have in the cloud, you will need 7-zip, unzip or equivalent software to unzip the archive (and of course the password ZIP_ARCHIVE_PASSWORD used to create it in the first place). That step is omitted below.
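For example, extraction might look like this (the archive name is illustrative; unzip -P also works):

    7z x -p\"$ZIP_ARCHIVE_PASSWORD\" backup_file.zip\n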

For the database restores below, you can use the backuper image itself (it already has the required software installed, also for restores, and must have network access to the database). The tricky part can be \"how to deliver the zipped backup file to the backuper container\". This is also true for transporting it anywhere. The usual way is scp and, for docker compose and kubernetes containers respectively, docker compose cp and kubectl cp.
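A sketch of delivering the archive into a running container (service, namespace and pod names follow the deployment examples; paths are illustrative):

    # docker compose\ndocker compose cp ./backup_file.zip backuper:/tmp/backup_file.zip\n# kubernetes\nkubectl cp ./backup_file.zip backuper/backuper-9c8b8b77d-z5xsc:/tmp/backup_file.zip\n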

Another idea, if you feel unhappy with passing your database backups around (even if password protected), is to make the backup file public and downloadable for a moment, and use a tool like curl to fetch it at the destination. If it leaks, there is still very strong cryptography to protect you. This should be sufficient for a bunch of projects.
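Purely as an illustration (this URL is a made-up public object link following the bucket_upload_path layout, not something backuper generates):

    curl -fSL -o backup_file.zip \"https://storage.googleapis.com/my_bucket_name/my_backuper_instance_1/postgresql/backup_file.zip\"\n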

    "},{"location":"how_to_restore/#directory-and-single-file","title":"Directory and single file","text":"

These are just a file or a directory; copy them back wherever you want.

    "},{"location":"how_to_restore/#postgresql","title":"PostgreSQL","text":"

The backup is made using pg_dump (see def _backup() params). To restore the database, you will need psql https://www.postgresql.org/docs/current/app-psql.html and network access to the database. On debian/ubuntu, this is provided by the apt package postgresql-client.

Follow the docs (backuper creates typical SQL file backups, nothing special about them), but the command will look something like this:

    psql -h localhost -p 5432 -U postgres database_name -W < backup_file.sql\n
    "},{"location":"how_to_restore/#mysql","title":"MySQL","text":"

The backup is made using mysqldump (see def _backup() params). To restore the database, you will need mysql https://dev.mysql.com/doc/refman/8.0/en/mysql.html and network access to the database. On debian/ubuntu, this is provided by the apt package mysql-client.

Follow the docs (backuper creates typical SQL file backups, nothing special about them), but the command will look something like this:

    mysql -h localhost -P 3306 -u root -p database_name < backup_file.sql\n
    "},{"location":"how_to_restore/#mariadb","title":"MariaDB","text":"

The backup is made using mariadb-dump (see def _backup() params). To restore the database, you will need mysql or mariadb https://dev.mysql.com/doc/refman/8.0/en/mysql.html or https://mariadb.com/kb/en/mariadb-command-line-client/ and network access to the database. On debian/ubuntu, this is provided by the apt package mysql-client, or see https://mariadb.com/kb/en/mariadb-package-repository-setup-and-usage/.

Follow the docs (backuper creates typical SQL file backups, nothing special about them), but the command will look something like this:

    mariadb -h localhost -P 3306 -u root -p database_name < backup_file.sql\n

    "},{"location":"backup_targets/directory/","title":"Directory","text":""},{"location":"backup_targets/directory/#environment-variable","title":"Environment variable","text":"
    DIRECTORY_SOME_STRING=\"abs_path=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"DIRECTORY_\" will be handled as Directory. There can be multiple files paths definition for one backuper instance, for example DIRECTORY_FOO and DIRECTORY_BAR. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/directory/#params","title":"Params","text":"Name Type Description Default abs_path string[requried] Absolute path to folder for backup. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/directory/#examples","title":"Examples","text":"
# 1. Directory /home/user/folder with backup every single minute\nDIRECTORY_FIRST='abs_path=/home/user/folder cron_rule=* * * * *'\n\n# 2. Directory /etc with backup on every night (UTC) at 05:00\nDIRECTORY_SECOND='abs_path=/etc cron_rule=0 5 * * *'\n\n# 3. Mounted directory /mnt/homedir with backup every 3 hours at minute 15 and a max number of backups of 20\nDIRECTORY_HOME_DIR='abs_path=/mnt/homedir cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/file/","title":"Single file","text":""},{"location":"backup_targets/file/#environment-variable","title":"Environment variable","text":"
    SINGLEFILE_SOME_STRING=\"abs_path=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"SINGLEFILE_\" will be handled as Single File. There can be multiple files paths definition for one backuper instance, for example SINGLEFILE_FOO and SINGLEFILE_BAR. Params must be included in value, splited by single space for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/file/#params","title":"Params","text":"Name Type Description Default abs_path string[requried] Absolute path to file for backup. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/file/#examples","title":"Examples","text":"
    # File /home/user/file.txt with backup every single minute\nSINGLEFILE_FIRST='abs_path=/home/user/file.txt cron_rule=* * * * *'\n\n# File /etc/hosts with backup every night (UTC) at 05:00\nSINGLEFILE_SECOND='abs_path=/etc/hosts cron_rule=0 5 * * *'\n\n# File config.json in mounted dir /mnt/appname with backup every 3 hours at minute 15 and a maximum of 20 backups\nSINGLEFILE_THIRD='abs_path=/mnt/appname/config.json cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/mariadb/","title":"MariaDB","text":""},{"location":"backup_targets/mariadb/#environment-variable","title":"Environment variable","text":"
    MARIADB_SOME_STRING=\"host=... password=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"MARIADB_\" will be handled as a MariaDB target. There can be multiple database definitions for one backuper instance, for example MARIADB_FOO_MY_DB1 and MARIADB_BAR_MY_DB2. Supported versions are: 10.11, 10.6, 10.5, 10.4. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/mariadb/#params","title":"Params","text":"Name Type Description Default password string[requried] Mariadb database password. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - user string Mariadb database username. root host string Mariadb database hostname. localhost port int Mariadb database port. 3306 db string Mariadb database name. mariadb max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/mariadb/#examples","title":"Examples","text":"
    # 1. Local MariaDB with backup every single minute\nMARIADB_FIRST_DB='host=localhost port=3306 password=secret cron_rule=* * * * *'\n\n# 2. MariaDB in local network with backup every night (UTC) at 05:00\nMARIADB_SECOND_DB='host=10.0.0.1 port=3306 user=foo password=change_me! db=bar cron_rule=0 5 * * *'\n\n# 3. MariaDB in local network with backup every 3 hours at minute 15 and a maximum of 20 backups\nMARIADB_THIRD_DB='host=192.168.1.5 port=3306 user=root password=change_me_please! db=project cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/mysql/","title":"MySQL","text":""},{"location":"backup_targets/mysql/#environment-variable","title":"Environment variable","text":"
    MYSQL_SOME_STRING=\"host=... password=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"MYSQL_\" will be handled as a MySQL target. There can be multiple database definitions for one backuper instance, for example MYSQL_FOO_MY_DB1 and MYSQL_BAR_MY_DB2. Supported versions are: 8.0, 5.7. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/mysql/#params","title":"Params","text":"Name Type Description Default password string[requried] MySQL database password. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - user string MySQL database username. root host string MySQL database hostname. localhost port int MySQL database port. 3306 db string MySQL database name. mysql max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/mysql/#examples","title":"Examples","text":"
    # 1. Local MySQL with backup every single minute\nMYSQL_FIRST_DB='host=localhost port=3306 password=secret cron_rule=* * * * *'\n\n# 2. MySQL in local network with backup every night (UTC) at 05:00\nMYSQL_SECOND_DB='host=10.0.0.1 port=3306 user=foo password=change_me! db=bar cron_rule=0 5 * * *'\n\n# 3. MySQL in local network with backup every 3 hours at minute 15 and a maximum of 20 backups\nMYSQL_THIRD_DB='host=192.168.1.5 port=3306 user=root password=change_me_please! db=project cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"backup_targets/postgresql/","title":"PostgreSQL","text":""},{"location":"backup_targets/postgresql/#environment-variable","title":"Environment variable","text":"
    POSTGRESQL_SOME_STRING=\"host=... password=... cron_rule=...\"\n

    Note

    Any environment variable that starts with \"POSTGRESQL_\" will be handled as a PostgreSQL target. There can be multiple database definitions for one backuper instance, for example POSTGRESQL_FOO_MY_DB1 and POSTGRESQL_BAR_MY_DB2. Supported versions are: 15, 14, 13, 12, 11. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"backup_targets/postgresql/#params","title":"Params","text":"Name Type Description Default password string[requried] PostgreSQL database password. - cron_rule string[requried] Cron expression for backups, see https://crontab.guru/ for help. - user string PostgreSQL database username. postgres host string PostgreSQL database hostname. localhost port int PostgreSQL database port. 5432 db string PostgreSQL database name. postgres max_backups int Soft limit how many backups can live at once for backup target. Defaults to 7. This must makes sense with cron expression you use. For example if you want to have 7 day retention, and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28. Limit is soft and can be exceeded if no backup is older than value specified in min_retention_days. Min 1 and max 998. Defaults to enviornment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files before, regardles of other options. Min 0 and max 36600. Defaults to enviornment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/postgresql/#examples","title":"Examples","text":"
    # 1. Local PostgreSQL with backup every single minute\nPOSTGRESQL_FIRST_DB='host=localhost port=5432 password=secret cron_rule=* * * * *'\n\n# 2. PostgreSQL in local network with backup every night (UTC) at 05:00\nPOSTGRESQL_SECOND_DB='host=10.0.0.1 port=5432 user=foo password=change_me! db=bar cron_rule=0 5 * * *'\n\n# 3. PostgreSQL in local network with backup every 3 hours at minute 15 and a maximum of 20 backups\nPOSTGRESQL_THIRD_DB='host=192.168.1.5 port=5432 user=root password=change_me_please! db=project cron_rule=15 */3 * * * max_backups=20'\n
    "},{"location":"notifications/discord/","title":"Discord","text":"

    It is possible to send messages to your Discord channels in the event of failed backups.

    Integration is via Discord webhooks and environment variables.

    Follow their documentation https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks.

    You should be able to generate a webhook URL like \"https://discord.com/api/webhooks/1111111111/some-long-token\".

    "},{"location":"notifications/discord/#environemt-variables","title":"Environemt variables","text":"Name Type Description Default DISCORD_WEBHOOK_URL http url Webhook URL for fail messages. - DISCORD_MAX_MSG_LEN int Maximum length of messages send to discord API. Sensible default used. Min 150 and max 10000. 1500"},{"location":"notifications/discord/#examples","title":"Examples:","text":"
    DISCORD_WEBHOOK_URL=\"https://discord.com/api/webhooks/1111111111/long-token\"\n
    "},{"location":"notifications/slack/","title":"Slack","text":"

    It is possible to send messages to your Slack channels in the event of failed backups.

    Integration is via Slack webhooks and environment variables.

    Follow their documentation https://api.slack.com/messaging/webhooks#create_a_webhook.

    You should be able to generate a webhook URL like \"https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX\".

    "},{"location":"notifications/slack/#environemt-variables","title":"Environemt variables","text":"Name Type Description Default SLACK_WEBHOOK_URL http url Webhook URL for fail messages. - SLACK_MAX_MSG_LEN int Maximum length of messages send to slack API. Sensible default used. Min 150 and max 10000. 1500"},{"location":"notifications/slack/#examples","title":"Examples:","text":"
    SLACK_WEBHOOK_URL=\"https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX\"\n
    "},{"location":"notifications/smtp/","title":"Email (SMTP)","text":"

    It is possible to send messages via email using the SMTP protocol. The implementation uses STARTTLS, so be sure your mail server supports it. For technical details refer to https://docs.python.org/3/library/smtplib.html.

    Note: when any of the params SMTP_HOST, SMTP_FROM_ADDR, SMTP_PASSWORD, SMTP_TO_ADDRS is set, all of them are required. If any is missing, an exception will be raised.

    "},{"location":"notifications/smtp/#environemt-variables","title":"Environemt variables","text":"Name Type Description Default SMTP_HOST string[required] SMTP server host. - SMTP_FROM_ADDR string[required] Email address that will send emails. - SMTP_PASSWORD string[required] Password for SMTP_FROM_ADDR. - SMTP_TO_ADDRS string[required] Comma separated list of email addresses to send emails. For example email1@example.com,email2@example.com. - SMTP_PORT int SMTP server port. 587"},{"location":"notifications/smtp/#examples","title":"Examples:","text":"
    SMTP_HOST=\"pro2.mail.ovh.net\"\nSMTP_FROM_ADDR=\"test@example.com\"\nSMTP_PASSWORD=\"changeme\"\nSMTP_TO_ADDRS=\"me@example.com,other@example.com\"\nSMTP_PORT=587\n
    "},{"location":"providers/aws_s3/","title":"AWS S3","text":""},{"location":"providers/aws_s3/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=aws bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 key_id=AKIAU5JB5UQDL8C3K6UP key_secret=nFTXlO7nsPNNUj59tFE21Py9tOO8fwOtHNsr3YwN region=eu-central-1\"\n

    Uses AWS S3 bucket for storing backups.

    Note

    There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is determined by name, in this case name=aws. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"providers/aws_s3/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string gcs to use Google Cloud Storage. - bucket_name string[requried] Your globally unique bucket name. - bucket_upload_path string[requried] Prefix that every created backup will have, for example if it is equal to my_backuper_instance_1, paths to backups will look like my_backuper_instance_1/your_backup_target_eg_postgresql/file123.zip. Usually this should be something unique for this backuper instance, for example k8s_foo_backuper. - region string[requried] Bucket region. - key_id string[requried] IAM user access key id, see Resources below. - key_secret string[requried] IAM user access key secret, see Resources below. - max_bandwidth int Max bandwith of file upload that is passed to aws sdk transfer config, see their docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig. null"},{"location":"providers/aws_s3/#examples","title":"Examples","text":"
    # 1. Bucket pets-bucket\nBACKUP_PROVIDER='name=aws bucket_name=pets-bucket bucket_upload_path=pets_backuper key_id=AKIAU5JB5UQDL8C3K6UP key_secret=nFTXlO7nsPNNUj59tFE21Py9tOO8fwOtHNsr3YwN region=eu-central-1'\n\n# 2. Bucket birds with a different region\nBACKUP_PROVIDER='name=aws bucket_name=birds bucket_upload_path=birds_backuper key_id=AKIAU5JB5UQDL8C3K6UP key_secret=nFTXlO7nsPNNUj59tFE21Py9tOO8fwOtHNsr3YwN region=us-east-1'\n
    "},{"location":"providers/aws_s3/#resources","title":"Resources","text":""},{"location":"providers/aws_s3/#bucket-and-iam-walkthrough","title":"Bucket and IAM walkthrough","text":"

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/walkthrough1.html

    "},{"location":"providers/aws_s3/#giving-iam-user-required-permissions","title":"Giving IAM user required permissions","text":"

    Assuming your bucket name is my_bucket_name and your upload path is test-upload-path, 3 permissions are needed for the IAM user (s3:ListBucket, s3:PutObject, s3:DeleteObject):

    {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Sid\": \"AllowList\",\n\"Effect\": \"Allow\",\n\"Action\": \"s3:ListBucket\",\n\"Resource\": \"arn:aws:s3:::my_bucket_name\",\n\"Condition\": {\n\"StringLike\": {\n\"s3:prefix\": \"test-upload-path/*\"\n}\n}\n},\n{\n\"Sid\": \"AllowPutGetDelete\",\n\"Effect\": \"Allow\",\n\"Action\": [\n\"s3:PutObject\",\n\"s3:DeleteObject\"\n],\n\"Resource\": \"arn:aws:s3:::my_bucket_name/test-upload-path/*\"\n}\n]\n}\n

    "},{"location":"providers/azure/","title":"Azure Blob Storage","text":""},{"location":"providers/azure/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=azure container_name=my-backuper-instance connect_string=DefaultEndpointsProtocol=https;AccountName=accname;AccountKey=secret;EndpointSuffix=core.windows.net\"\n

    Uses Azure Blob Storage for storing backups.

    Note

    There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is determined by name, in this case name=azure. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"providers/azure/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string azure to use Google Cloud Storage. - container_name string[requried] Storage account container name. It must be already created, backuper won't create new container. - connect_string string[requried] Connection string copied from your storage account \"Access keys\" section. -"},{"location":"providers/azure/#examples","title":"Examples","text":"
    # 1. Storage account accname and container name my-backuper-instance\nBACKUP_PROVIDER=\"name=azure container_name=my-backuper-instance connect_string=DefaultEndpointsProtocol=https;AccountName=accname;AccountKey=secret;EndpointSuffix=core.windows.net\"\n\n# 2. Storage account birds and container name birds\nBACKUP_PROVIDER=\"name=azure container_name=birds connect_string=DefaultEndpointsProtocol=https;AccountName=birds;AccountKey=secret;EndpointSuffix=core.windows.net\"\n
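
    If you use the Azure CLI, the connection string can also be read without the portal (the storage account and resource group names below are placeholders):

    az storage account show-connection-string --name accname --resource-group my-resource-group\n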
    "},{"location":"providers/azure/#resources","title":"Resources","text":""},{"location":"providers/azure/#creating-azure-storage-account","title":"Creating azure storage account","text":"

    https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal

    "},{"location":"providers/debug/","title":"Debug","text":""},{"location":"providers/debug/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=debug\"\n

    Uses only local files (a folder inside the container) for storing backups. This is meant only for debugging purposes.

    If you absolutely must not upload backups to the outside world, consider adding a persistent volume for the folder where backups live in the container, that is /var/lib/backuper/data.
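
    A minimal docker compose sketch of such a volume with the debug provider (the volume name is a placeholder; ZIP_ARCHIVE_PASSWORD and at least one backup target variable are omitted for brevity):

    # docker-compose.yml (fragment)\nservices:\n  backuper:\n    image: rafsaf/backuper:latest\n    environment:\n      - BACKUP_PROVIDER=name=debug\n    volumes:\n      - backuper_data:/var/lib/backuper/data\n\nvolumes:\n  backuper_data:\n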

    Note

    There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is determined by name, in this case name=debug.

    "},{"location":"providers/debug/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string debug to use Debug. -"},{"location":"providers/debug/#examples","title":"Examples","text":"
    # 1. Debug provider\nBACKUP_PROVIDER='name=debug'\n
    "},{"location":"providers/google_cloud_storage/","title":"Google Cloud Storage","text":""},{"location":"providers/google_cloud_storage/#environment-variable","title":"Environment variable","text":"
    BACKUP_PROVIDER=\"name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo=\"\n

    Uses Google Cloud Storage bucket for storing backups.

    Note

    There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is determined by name, in this case name=gcs. Params must be included in the value, split by a single space, for example \"value1=1 value2=foo\".

    "},{"location":"providers/google_cloud_storage/#params","title":"Params","text":"Name Type Description Default name string[requried] Must be set literaly to string gcs to use Google Cloud Storage. - bucket_name string[requried] Your globally unique bucket name. - bucket_upload_path string[requried] Prefix that every created backup will have, for example if it is equal to my_backuper_instance_1, paths to backups will look like my_backuper_instance_1/your_backup_target_eg_postgresql/file123.zip. Usually this should be something unique for this backuper instance, for example k8s_foo_backuper. - service_account_base64 string[requried] Base64 JSON service account file created in IAM, with write and read access permissions to bucket, see Resources below. - chunk_size_mb int The size of a chunk of data transfered to GCS, consider lower value only if for example your internet connection is slow or you know what you are doing, 100MB is google default. 100 chunk_timeout_secs int The chunk of data transfered to GCS upload timeout, consider higher value only if for example your internet connection is slow or you know what you are doing, 60s is google default. 60"},{"location":"providers/google_cloud_storage/#examples","title":"Examples","text":"
    # 1. Bucket pets-bucket\nBACKUP_PROVIDER='name=gcs bucket_name=pets-bucket bucket_upload_path=pets_backuper service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo='\n\n# 2. Bucket birds with smaller chunk size\nBACKUP_PROVIDER='name=gcs bucket_name=birds bucket_upload_path=birds_backuper chunk_size_mb=25 chunk_timeout_secs=120 service_account_base64=Z29vZ2xlX3NlcnZpY2VfYWNjb3VudAo='\n
    "},{"location":"providers/google_cloud_storage/#resources","title":"Resources","text":""},{"location":"providers/google_cloud_storage/#creating-bucket","title":"Creating bucket","text":"

    https://cloud.google.com/storage/docs/creating-buckets

    "},{"location":"providers/google_cloud_storage/#creating-service-account","title":"Creating service account","text":"

    https://cloud.google.com/iam/docs/service-accounts-create

    "},{"location":"providers/google_cloud_storage/#giving-it-required-roles-to-service-account","title":"Giving it required roles to service account","text":"
    1. Go \"IAM and admin\" -> \"IAM\"

    2. Find your service account and update its roles

    Give it the following roles so it has read access to the whole bucket \"my_bucket_name\" and admin access only to the path prefix \"my_backuper_instance_1\" in bucket \"my_bucket_name\":

    1. Storage Object Admin (with IAM condition: NAME starts with projects/_/buckets/my_bucket_name/objects/my_backuper_instance_1)
    2. Storage Object Viewer (with IAM condition: NAME starts with projects/_/buckets/my_bucket_name)

    After successfully creating the service account, create a new private key of JSON type and download it. A file similar to your_project_name-03189413be28.json will appear in your Downloads.
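
    If you prefer the gcloud CLI over the console for this step, a JSON key can also be created with (the service account email below is a placeholder):

    gcloud iam service-accounts keys create your_project_name-03189413be28.json --iam-account=my-backuper-sa@your_project_name.iam.gserviceaccount.com\n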

    To get its base64 representation (without any newlines), use the command:

    cat your_project_name-03189413be28.json | base64 -w 0\n
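
    The resulting string is what goes into the service_account_base64 param; it can also be substituted inline, for example (the bucket name and upload path below are placeholders):

    BACKUP_PROVIDER=\"name=gcs bucket_name=my_bucket_name bucket_upload_path=my_backuper_instance_1 service_account_base64=$(base64 -w 0 your_project_name-03189413be28.json)\"\n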
    "},{"location":"providers/google_cloud_storage/#terraform","title":"Terraform","text":"

    If using Terraform for managing cloud infrastructure, the service account definition can look as follows:

    resource \"google_service_account\" \"backuper-my_backuper_instance_1\" {\naccount_id   = \"backuper-my_backuper_instance_1\"\ndisplay_name = \"SA my_backuper_instance_1 for backuper bucket access\"\n}\n\nresource \"google_project_iam_member\" \"backuper-my_backuper_instance_1-iam-object-admin\" {\nproject = local.project_id\n  role    = \"roles/storage.objectAdmin\"\nmember  = \"serviceAccount:${google_service_account.backuper-my_backuper_instance_1.email}\"\ncondition {\ntitle      = \"object_admin_only_backuper_bucket_specific_path\"\nexpression = \"resource.name.startsWith(\\\"projects/_/buckets/my_bucket_name/objects/my_backuper_instance_1\\\")\"\n}\n}\nresource \"google_project_iam_member\" \"backuper-my_backuper_instance_1-iam-object-viewer\" {\nproject = local.project_id\n  role    = \"roles/storage.objectViewer\"\nmember  = \"serviceAccount:${google_service_account.backuper-my_backuper_instance_1.email}\"\n\ncondition {\ntitle      = \"object_viewer_only_backuper_bucket\"\nexpression = \"resource.name.startsWith(\\\"projects/_/buckets/my_bucket_name\\\")\"\n}\n}\n

    "}]} \ No newline at end of file diff --git a/5.0/sitemap.xml.gz b/5.0/sitemap.xml.gz index d22a83d..0952894 100644 Binary files a/5.0/sitemap.xml.gz and b/5.0/sitemap.xml.gz differ