diff --git a/5.0/index.html b/5.0/index.html
index 6942b02..f9cca88 100644
--- a/5.0/index.html
+++ b/5.0/index.html
@@ -855,6 +855,27 @@
A tool for performing scheduled database backups and transferring encrypted data to secure public clouds, for home labs, hobby projects, etc., in environments such as k8s, docker, VMs.
Backups are in zip format using 7-zip, with strong AES-256 encryption under the hood.
Every day at 5am, backup to Google Cloud Storage of the PostgreSQL database defined in the same file and running in a docker container.
+Every day at 5am, backup of the PostgreSQL database defined in the same file and running in a docker container.
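To make that setup concrete, here is a sketch with plain docker run (the docs' example uses a compose file; the image tag, password, and the key=value target string below are illustrative assumptions, not the authoritative syntax — see the Configuration and backup target chapters):

```bash
# Sketch only: backuper running next to an existing PostgreSQL container.
# Image name, credentials and the target parameter string are assumptions.
docker run -d \
  -e ZIP_ARCHIVE_PASSWORD='change-me' \
  -e BACKUP_PROVIDER='name=debug' \
  -e POSTGRESQL_MAIN_DB='host=postgres port=5432 password=secret cron_rule=0 5 * * *' \
  rafsaf/backuper:latest
```

The cron_rule `0 5 * * *` corresponds to the daily 5am schedule described above.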
@@ -926,7 +947,7 @@
Using docker image:
Every day at 5am, backup to Google Cloud Storage of the PostgreSQL database defined in the same file and running in a docker container. (NOTE: this will use the debug provider, which stores backups locally in the container.)
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Backuper","text":""},{"location":"#backuper","title":"Backuper","text":" A tool for performing scheduled database backups and transferring encrypted data to secure public clouds, for home labs, hobby projects, etc., in environments such as k8s, docker, VMs. Backups are in zip format using 7-zip, with strong AES-256 encryption under the hood.
Using docker image:
Every day at 5am, backup of the PostgreSQL database defined in the same file and running in a docker container. (NOTE: this will use the debug provider, which stores backups locally in the container). "},{"location":"#real-world-usage","title":"Real world usage","text":"The author actively uses backuper (with GCS) for one production project plemiona-planer.pl postgres database (both PRD and STG) and for a bunch of homelab projects, including self hosted Firefly III mariadb, Grafana postgres, KeyCloak postgres, Nextcloud postgres and configuration file, Minecraft server files, and two other postgres dbs for some demo projects. See how it looks for a ~2GB database: "},{"location":"configuration/","title":"Configuration","text":" Environment variables Name Type Description Default ZIP_ARCHIVE_PASSWORD string[required] Zip archive password that all backups generated by this backuper instance will have. When it is lost, you lose access to your backups. Special characters are allowed since shlex quoting is used across the app, though they are not recommended, so that the password can also be used with terminal programs like unzip . - BACKUP_PROVIDER string[required] See the Providers chapter; the chosen backup provider, for example GCS. - INSTANCE_NAME string Name of this backuper instance, used for example when sending fail messages. Defaults to the system hostname. system hostname BACKUP_MAX_NUMBER int Soft limit on how many backups can live at once for a backup target. Defaults to 7 . This must make sense with the cron expression you use. For example, if you want 7 day retention and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28 . The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days in the backup target. Note this is a global default and can be overwritten using the max_backups param in specific targets. Min 1 and max 998 . 7 BACKUP_MIN_RETENTION_DAYS int Hard minimum backups lifetime in days. Backuper won't ever delete files younger than this, regardless of other options. Note this is a global default and can be overwritten using the min_retention_days param in specific targets. Min 0 and max 36600 . 3 ROOT_MODE bool If false , the process in the container will start backuper using a user with the minimal permissions required. If true , it will run as root (it may help, for example, with file/directory backup permission issues in mounted volumes). false POSTGRESQL_... backup target syntax PostgreSQL database target, see PostgreSQL. - MYSQL_... backup target syntax MySQL database target, see MySQL. - MARIADB_... backup target syntax MariaDB database target, see MariaDB. - SINGLEFILE_... backup target syntax Single file target, see Single file. - DIRECTORY_... backup target syntax Directory target, see Directory. - DISCORD_WEBHOOK_URL http url Webhook URL for fail messages. - DISCORD_MAX_MSG_LEN int Maximum length of messages sent to the Discord API. A sensible default is used. Min 150 and max 10000 . 1500 SLACK_WEBHOOK_URL http url Webhook URL for fail messages. - SLACK_MAX_MSG_LEN int Maximum length of messages sent to the Slack API. A sensible default is used. Min 150 and max 10000 . 1500 SMTP_HOST string SMTP server host. - SMTP_FROM_ADDR string Email address that will send emails. - SMTP_PASSWORD string Password for SMTP_FROM_ADDR . - SMTP_TO_ADDRS string Comma separated list of email addresses to send emails to. For example email1@example.com,email2@example.com . - SMTP_PORT int SMTP server port.
587 LOG_LEVEL string Case sensitive log level, must be one of INFO , DEBUG , WARNING , ERROR , CRITICAL . INFO SUBPROCESS_TIMEOUT_SECS int Indicates how long subprocesses can last. Note that all backups are run from the shell in subprocesses. Defaults to 3600 seconds, which should be enough to back up even big databases. Min 5 and max 86400 (24h). 3600 ZIP_ARCHIVE_LEVEL int Compression level of 7-zip via the -mx option: -mx[N] : set compression level: -mx1 (fastest) ... -mx9 (ultra) . Defaults to 3 , which should be sufficient and fast enough. Min 1 and max 9 . 3 LOG_FOLDER_PATH string Path to store log files; for local development ./logs , in the container /var/log/backuper . /var/log/backuper SIGTERM_TIMEOUT_SECS int Time in seconds, on exit, that backuper will wait for ongoing backup threads before force killing them and exiting. Min 0 and max 86400 (24h). 30 ZIP_SKIP_INTEGRITY_CHECK bool By default set to false : after the 7zip archive is created, an integrity check runs on it. You can opt out of this behaviour for performance reasons by using true . false BACKUPER_CPU_ARCHITECTURE string CPU architecture, supported amd64 and arm64 . The Docker container will set it automatically, so probably do not change it. amd64 "},{"location":"deployment/","title":"Deployment","text":" In general, use the docker image "},{"location":"deployment/#notes","title":"Notes","text":"
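As orientation for the variables documented above, a minimal environment could look like the following sketch (all values are placeholders; the exact BACKUP_PROVIDER string format is documented in the Providers chapter and the key=value form here is an assumption):

```bash
# Required (placeholder values)
export ZIP_ARCHIVE_PASSWORD='long-random-password'  # losing it means losing access to backups
export BACKUP_PROVIDER='name=debug'                 # assumed key=value form; see Providers
# Optional tuning, mirroring the documented defaults
export BACKUP_MAX_NUMBER=7
export BACKUP_MIN_RETENTION_DAYS=3
export LOG_LEVEL='INFO'
export ZIP_ARCHIVE_LEVEL=3
export SUBPROCESS_TIMEOUT_SECS=3600
```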
"},{"location":"deployment/#notes_1","title":"Notes","text":"
To restore backups you already have in the cloud, you will certainly need For the database restores below, you can certainly use Another idea, if you feel unhappy about passing your database backups around (even if password protected), would be to make the backup file public for a moment, available to download, and use tools like Just files or directories; copy them back where you want. "},{"location":"how_to_restore/#postgresql","title":"PostgreSQL","text":"Backup is made using Follow the docs (backuper creates typical SQL file backups, nothing special about them), but the command will look something like this (see the sketch after this passage): "},{"location":"how_to_restore/#mysql","title":"MySQL","text":"Backup is made using Follow the docs (backuper creates typical SQL file backups, nothing special about them), but the command will look something like this: "},{"location":"how_to_restore/#mariadb","title":"MariaDB","text":"Backup is made using Follow the docs (backuper creates typical SQL file backups, nothing special about them), but the command will look something like this: "},{"location":"backup_targets/directory/","title":"Directory","text":""},{"location":"backup_targets/directory/#environment-variable","title":"Environment variable","text":" Note Any environment variable that starts with \"DIRECTORY_\" will be handled as Directory. There can be multiple file path definitions for one backuper instance, for example 7 . This must make sense with the cron expression you use. For example, if you want 7 day retention and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28 . The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days. Min 1 and max 998 . Defaults to the environment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files younger than this, regardless of other options. Min 0 and max 36600 . Defaults to the environment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/directory/#examples","title":"Examples","text":" "},{"location":"backup_targets/file/","title":"Single file","text":""},{"location":"backup_targets/file/#environment-variable","title":"Environment variable","text":" Note Any environment variable that starts with \"SINGLEFILE_\" will be handled as Single File. There can be multiple file path definitions for one backuper instance, for example 7 . This must make sense with the cron expression you use. For example, if you want 7 day retention and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28 . The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days. Min 1 and max 998 . Defaults to the environment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files younger than this, regardless of other options. Min 0 and max 36600 . Defaults to the environment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/file/#examples","title":"Examples","text":" "},{"location":"backup_targets/mariadb/","title":"MariaDB","text":""},{"location":"backup_targets/mariadb/#environment-variable","title":"Environment variable","text":" Note Any environment variable that starts with \"MARIADB_\" will be handled as MariaDB.
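The PostgreSQL restore sketch referenced above, assuming the archive name and connection details below are placeholders (the archive password is whatever ZIP_ARCHIVE_PASSWORD was set to):

```bash
# 1. Fetch the backup from your provider, then extract the AES-256 encrypted zip with 7-zip.
7z x -p"$ZIP_ARCHIVE_PASSWORD" postgresql_mydb_backup.zip
# 2. The archive contains a typical SQL dump, so a standard psql restore applies.
#    Host, user and database name below are placeholders.
psql -h localhost -p 5432 -U postgres -d mydb -f postgresql_mydb_backup.sql
```

The MySQL and MariaDB restores follow the same pattern with the mysql/mariadb client in place of psql.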
There can be multiple target definitions for one backuper instance, for example 7 . This must make sense with the cron expression you use. For example, if you want 7 day retention and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28 . The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days. Min 1 and max 998 . Defaults to the environment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files younger than this, regardless of other options. Min 0 and max 36600 . Defaults to the environment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/mariadb/#examples","title":"Examples","text":" "},{"location":"backup_targets/mysql/","title":"MySQL","text":""},{"location":"backup_targets/mysql/#environment-variable","title":"Environment variable","text":" Note Any environment variable that starts with \"MYSQL_\" will be handled as MySQL. There can be multiple target definitions for one backuper instance, for example 7 . This must make sense with the cron expression you use. For example, if you want 7 day retention and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28 . The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days. Min 1 and max 998 . Defaults to the environment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files younger than this, regardless of other options. Min 0 and max 36600 . Defaults to the environment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/mysql/#examples","title":"Examples","text":" "},{"location":"backup_targets/postgresql/","title":"PostgreSQL","text":""},{"location":"backup_targets/postgresql/#environment-variable","title":"Environment variable","text":" Note Any environment variable that starts with \"POSTGRESQL_\" will be handled as PostgreSQL. There can be multiple target definitions for one backuper instance, for example 7 . This must make sense with the cron expression you use. For example, if you want 7 day retention and make backups at 5:00, max_backups=7 is fine, but if you make 4 backups per day, you would need max_backups=28 . The limit is soft and can be exceeded if no backup is older than the value specified in min_retention_days. Min 1 and max 998 . Defaults to the environment variable BACKUP_MAX_NUMBER, see Configuration. BACKUP_MAX_NUMBER min_retention_days int Hard minimum backups lifetime in days. Backuper won't ever delete files younger than this, regardless of other options. Min 0 and max 36600 . Defaults to the environment variable BACKUP_MIN_RETENTION_DAYS, see Configuration. BACKUP_MIN_RETENTION_DAYS"},{"location":"backup_targets/postgresql/#examples","title":"Examples","text":" "},{"location":"notifications/discord/","title":"Discord","text":"It is possible to send messages to your Discord channels in the event of failed backups. Integration is via Discord webhooks and environment variables. Follow their documentation https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks. You should be able to generate webhooks like 150 and max 10000 .
1500"},{"location":"notifications/discord/#examples","title":"Examples:","text":" "},{"location":"notifications/slack/","title":"Slack","text":"It is possible to send messages to your Slack channels in the event of failed backups. Integration is via Slack webhooks and environment variables. Follow their documentation https://api.slack.com/messaging/webhooks#create_a_webhook. You should be able to generate webhooks like 150 and max 10000 . 1500"},{"location":"notifications/slack/#examples","title":"Examples:","text":" "},{"location":"notifications/smtp/","title":"Email (SMTP)","text":"It is possible to send messages via email using the SMTP protocol. The implementation uses STARTTLS, so be sure your mail server supports it. For technical details refer to https://docs.python.org/3/library/smtplib.html. Note, when any of the params SMTP_FROM_ADDR . - SMTP_TO_ADDRS string[required] Comma separated list of email addresses to send emails to. For example email1@example.com,email2@example.com . - SMTP_PORT int SMTP server port. 587"},{"location":"notifications/smtp/#examples","title":"Examples:","text":" "},{"location":"providers/aws_s3/","title":"AWS S3","text":""},{"location":"providers/aws_s3/#environment-variable","title":"Environment variable","text":" Uses an AWS S3 bucket for storing backups. Note There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed; for example, gcs means Google Cloud Storage. - bucket_name string[required] Your globally unique bucket name. - bucket_upload_path string[required] Prefix that every created backup will have; for example, if it is equal to my_backuper_instance_1 , paths to backups will look like my_backuper_instance_1/your_backup_target_eg_postgresql/file123.zip . Usually this should be something unique for this backuper instance, for example k8s_foo_backuper . - region string[required] Bucket region. - key_id string[required] IAM user access key id, see Resources below. - key_secret string[required] IAM user access key secret, see Resources below. - max_bandwidth int Max bandwidth of file upload that is passed to the aws sdk transfer config, see their docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig. null"},{"location":"providers/aws_s3/#examples","title":"Examples","text":" "},{"location":"providers/aws_s3/#resources","title":"Resources","text":""},{"location":"providers/aws_s3/#bucket-and-iam-walkthrough","title":"Bucket and IAM walkthrough","text":"https://docs.aws.amazon.com/AmazonS3/latest/userguide/walkthrough1.html "},{"location":"providers/aws_s3/#giving-iam-user-required-permissions","title":"Giving IAM user required permissions","text":"Assuming your bucket name is "},{"location":"providers/azure/","title":"Azure Blob Storage","text":""},{"location":"providers/azure/#environment-variable","title":"Environment variable","text":" Uses Azure Blob Storage for storing backups. Note There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed; azure means Azure Blob Storage. - container_name string[required] Storage account container name. It must already be created; backuper won't create a new container. - connect_string string[required] Connection string copied from your storage account \"Access keys\" section.
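A sketch of the notification settings described above; the variable names are the documented ones, and every value is a placeholder:

```bash
# Email via a STARTTLS-capable SMTP server
export SMTP_HOST='smtp.example.com'
export SMTP_PORT=587
export SMTP_FROM_ADDR='backuper@example.com'
export SMTP_PASSWORD='app-specific-password'
export SMTP_TO_ADDRS='admin@example.com,ops@example.com'
# Webhook-based fail messages work the same way (placeholder URLs)
export DISCORD_WEBHOOK_URL='https://discord.com/api/webhooks/000/placeholder'
export SLACK_WEBHOOK_URL='https://hooks.slack.com/services/T000/B000/placeholder'
```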
-"},{"location":"providers/azure/#examples","title":"Examples","text":" "},{"location":"providers/azure/#resources","title":"Resources","text":""},{"location":"providers/azure/#creating-azure-storage-account","title":"Creating azure storage account","text":"https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal "},{"location":"providers/debug/","title":"Debug","text":""},{"location":"providers/debug/#environment-variable","title":"Environment variable","text":" Uses only local files (a folder inside the container) for storing backups. This is meant only for debug purposes. If you absolutely must not upload backups to the outside world, consider adding a persistent volume for the folder where backups live in the container, that is Note There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed; debug means Debug. -"},{"location":"providers/debug/#examples","title":"Examples","text":" "},{"location":"providers/google_cloud_storage/","title":"Google Cloud Storage","text":""},{"location":"providers/google_cloud_storage/#environment-variable","title":"Environment variable","text":" Uses a Google Cloud Storage bucket for storing backups. Note There can be only one upload provider defined per app, using the BACKUP_PROVIDER environment variable. Its type is guessed; gcs means Google Cloud Storage. - bucket_name string[required] Your globally unique bucket name. - bucket_upload_path string[required] Prefix that every created backup will have; for example, if it is equal to my_backuper_instance_1 , paths to backups will look like my_backuper_instance_1/your_backup_target_eg_postgresql/file123.zip . Usually this should be something unique for this backuper instance, for example k8s_foo_backuper . - service_account_base64 string[required] Base64 encoded JSON service account file created in IAM, with write and read access permissions to the bucket, see Resources below. - chunk_size_mb int The size of a chunk of data transferred to GCS; consider a lower value only if, for example, your internet connection is slow or you know what you are doing. 100MB is the Google default. 100 chunk_timeout_secs int The upload timeout for a chunk of data transferred to GCS; consider a higher value only if, for example, your internet connection is slow or you know what you are doing. 60s is the Google default. 60"},{"location":"providers/google_cloud_storage/#examples","title":"Examples","text":" "},{"location":"providers/google_cloud_storage/#resources","title":"Resources","text":""},{"location":"providers/google_cloud_storage/#creating-bucket","title":"Creating bucket","text":"https://cloud.google.com/storage/docs/creating-buckets "},{"location":"providers/google_cloud_storage/#creating-service-account","title":"Creating service account","text":"https://cloud.google.com/iam/docs/service-accounts-create "},{"location":"providers/google_cloud_storage/#giving-it-required-roles-to-service-account","title":"Giving required roles to the service account","text":"
Give it the following roles so it will have read access to the whole bucket \"my_bucket_name\" and admin access only under the path prefix \"my_backuper_instance_1\" in bucket \"my_bucket_name\":
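One way to express that access split with gcloud, assuming the standard GCS roles roles/storage.objectViewer (bucket-wide read) and roles/storage.objectAdmin narrowed by an IAM condition to the prefix; the project, bucket and service account names are placeholders:

```bash
SA='backuper@my-project.iam.gserviceaccount.com'  # placeholder service account
# Read access to the whole bucket
gcloud storage buckets add-iam-policy-binding gs://my_bucket_name \
  --member="serviceAccount:${SA}" \
  --role='roles/storage.objectViewer'
# Admin access only under the my_backuper_instance_1 prefix
gcloud storage buckets add-iam-policy-binding gs://my_bucket_name \
  --member="serviceAccount:${SA}" \
  --role='roles/storage.objectAdmin' \
  --condition='expression=resource.name.startsWith("projects/_/buckets/my_bucket_name/objects/my_backuper_instance_1"),title=backuper-prefix-only'
```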
After successfully creating the service account, create a new private key of JSON type and download it. File similar to To get base64 (without any newlines) from it, use a command such as base64 -w0 your_downloaded_key.json : "},{"location":"providers/google_cloud_storage/#terraform","title":"Terraform","text":"If using terraform for managing cloud infra, the Service Account definition can be the following: "}]}
\ No newline at end of file
diff --git a/5.0/sitemap.xml.gz b/5.0/sitemap.xml.gz
index d22a83d..0952894 100644
Binary files a/5.0/sitemap.xml.gz and b/5.0/sitemap.xml.gz differ