Example Proxmox

Backing up a Proxmox cluster with HA replication

Due to the nature of Proxmox, we had to make a few enhancements to zfs-autobackup. These will probably also benefit other systems that use their own replication in combination with zfs-autobackup.

All data under rpool/data can live on multiple nodes of the cluster, and the names of those filesystems are unique across the whole cluster. Because of this, we should back up rpool/data of all nodes to the same destination; that way we won't have duplicate backups of the filesystems that are replicated. Thanks to the various options used below, you can even migrate guests between nodes and zfs-autobackup will be fine (it will automatically take the next backup from the new node).
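
To make this concrete, here is a sketch of how datasets could end up on the backup server with the script below (the vm-100/vm-101 disk names are just hypothetical examples; --strip-path 2 strips the rpool/data prefix, so guest disks from different nodes land side by side under one target):

# on pve1:  rpool/data/vm-100-disk-0
# on pve2:  rpool/data/vm-101-disk-0
#
# on the backup server (TARGET=rpool/pvebackups):
#   rpool/pvebackups/data/vm-100-disk-0
#   rpool/pvebackups/data/vm-101-disk-0
#   rpool/pvebackups/rpools/pve1/...    (each node's rpool, kept separate per host)
#   rpool/pvebackups/rpools/pve2/...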

In the example below, we have three nodes, named pve1, pve2 and pve3.

Preparing the Proxmox nodes

No preparation is needed; the script will take care of everything. You only need to set up the SSH keys so that the backup server can access the Proxmox nodes.
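
A minimal sketch of that key setup, assuming you run the backups as root on the backup server and key-based root login is allowed on the Proxmox nodes (pve1/pve2/pve3 are the host aliases defined in ~/.ssh/config below):

# on the backup server: generate a key pair if you don't have one yet
ssh-keygen -t ed25519

# copy the public key to every Proxmox node
ssh-copy-id root@pve1
ssh-copy-id root@pve2
ssh-copy-id root@pve3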

TIP: make sure your backup server is firewalled and cannot be reached from any production machine.

SSH config on backup server

I use ~/.ssh/config to specify how to reach the various hosts.

In this example we are making an off-site copy and use port forwarding to reach the Proxmox machines:

Host *
    ControlPath ~/.ssh/control-master-%r@%h:%p
    ControlMaster auto
    ControlPersist 3600
    Compression yes

Host pve1
    Hostname some.host.com
    Port 10001

Host pve2
    Hostname some.host.com
    Port 10002

Host pve3
    Hostname some.host.com
    Port 10003
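
With the SSH config in place, it's worth checking that the backup server can reach each node non-interactively before running the backup script; this is just a quick sanity check, not part of the original setup:

# should list the node's datasets without asking for a password
ssh pve1 zfs list -d 1 rpool
ssh pve2 zfs list -d 1 rpool
ssh pve3 zfs list -d 1 rpool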

Backup script

I use the following backup script on the backup server.

Adjust the variables HOSTS, TARGET and NAME to your needs.

Remember: if you need specific SSH settings, use ~/.ssh/config.

#!/bin/bash

HOSTS="pve1 pve2 pve3"
TARGET=rpool/pvebackups
NAME=prox

zfs create -p $TARGET/data &>/dev/null
for HOST in $HOSTS; do

  echo "################################### RPOOL $HOST"

  # enable backup
  ssh $HOST "zfs set autobackup:rpool_$NAME=child rpool/ROOT"

  #backup rpool to specific directory per host
  zfs create -p $TARGET/rpools/$HOST &>/dev/null
  zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST rpool_$NAME $TARGET/rpools/$HOST --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --no-holds "$@"

  zabbix-job-status backup_${HOST}_rpool_$NAME daily $? &>/dev/null


  echo "################################### DATA $HOST"

  # enable backup
  ssh $HOST "zfs set autobackup:data_$NAME=child rpool/data"

  #backup data filesystems to a common directory
  zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST data_$NAME $TARGET/data --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 300000 --no-holds "$@"

  zabbix-job-status backup_${HOST}_data_$NAME daily $? &>/dev/null

done

This script will also send the backup status to Zabbix (if you've installed my zabbix-job-status script: https://github.com/psy0rz/stuff/tree/master/zabbix-jobs).
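
Any extra arguments you pass to the script are forwarded to both zfs-autobackup runs, so you can do a dry run first and then schedule the real backup from cron. A minimal sketch, assuming the script is saved as /root/backup-proxmox.sh (a hypothetical path):

# dry run: show what zfs-autobackup would do without creating snapshots or transferring data
/root/backup-proxmox.sh --test

# crontab entry on the backup server: run the real backup every night at 03:00
0 3 * * * /root/backup-proxmox.sh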