Integrating GlusterFS and BAREOS.
The Gluster integration uses so-called URIs that are mostly analogous to the
way you specify Gluster volumes in, for example, QEMU.
Syntax:
gluster[+transport]://[server[:port]]/volname[/dir][?socket=...]
'gluster' is the protocol.
'transport' specifies the transport type used to connect to the gluster
management daemon (glusterd). Valid transport types are tcp, unix
and rdma. If no transport type is specified, tcp is assumed.
'server' specifies the server where the volume file specification for
the given volume resides. This can be a hostname, an IPv4 address
or an IPv6 address. An IPv6 address needs to be enclosed in square brackets [ ].
If the transport type is 'unix', the 'server' field should not be specified.
Instead, the 'socket' field needs to be populated with the path to the unix
domain socket.
'port' is the port number on which glusterd is listening. This is optional;
if it is not specified, Gluster falls back to its default port. If the
transport type is unix, 'port' should not be specified.
'volname' is the name of the gluster volume which contains the data that
needs to be backed up.
'dir' is an optional base directory on 'volname'.
Examples:
gluster://1.2.3.4/testvol[/dir]
gluster+tcp://1.2.3.4/testvol[/dir]
gluster+tcp://1.2.3.4:24007/testvol[/dir]
gluster+tcp://[1:2:3:4:5:6:7:8]/testvol[/dir]
gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol[/dir]
gluster+tcp://server.domain.com:24007/testvol[/dir]
gluster+unix:///testvol[/dir]?socket=/tmp/glusterd.socket
gluster+rdma://1.2.3.4:24007/testvol[/dir]
- Using GlusterFS as a backing store for backups
For this you need to install the storage-glusterfs package, which contains
the libbareossd-gfapi.so shared object implementing a dynamically loaded
storage backend.
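
A rough install sketch, assuming an RPM-based system with the Bareos
repositories configured (the exact package name can differ per distribution
and Bareos version):

  # install the gfapi storage backend on the storage daemon host
  yum install bareos-storage-glusterfs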
- bareos-sd.conf
Device {
  Name = GlusterStorage
  Device Type = gfapi
  Media Type = GlusterFile
  Archive Device = <some description>
  Device Options = "uri=gluster://<volume_spec>"
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
- bareos-dir.conf
Storage {
  Name = Gluster
  Address = <sd-hostname>
  Password = "<password>"
  Device = GlusterStorage
  Media Type = GlusterFile
  Maximum Concurrent Jobs = 10
}
In the above, GlusterFile is a sample Media Type. If you use multiple Gluster
storage devices in one storage daemon, make sure you use a different Media Type
for each storage, because the same Media Type means BAREOS thinks it can swap
volumes between the different storages, which is probably not going to work if
they point to different locations on your Gluster volume(s). A sketch of such a
setup is shown below.
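
As an illustration only, two gfapi devices pointing at different directories,
each with its own Media Type (resource names, Media Types and directories are
placeholders):

  Device {
    Name = GlusterStorageA
    Device Type = gfapi
    Media Type = GlusterFileA
    Device Options = "uri=gluster://<server>/<volname>/dir-a"
    # remaining options as in the Device example above
  }

  Device {
    Name = GlusterStorageB
    Device Type = gfapi
    Media Type = GlusterFileB
    Device Options = "uri=gluster://<server>/<volname>/dir-b"
    # remaining options as in the Device example above
  }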
- Backing up data on a Gluster Volume
For this you need to install the filedaemon-glusterfs-plugin package, which
contains the gfapi-fd.so shared object implementing a dynamically loaded file
daemon plugin.
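
Again a rough install sketch, assuming an RPM-based system (the exact package
name can differ per distribution and Bareos version):

  # install the gfapi plugin on the client running the file daemon
  yum install bareos-filedaemon-glusterfs-plugin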
For backing up Gluster, gfapi-fd.so implements two types of backup: one in
which it crawls the filesystem itself, and one in which it uses glusterfind to
create a list of files to back up. The glusterfind script was introduced in
Gluster 3.7, so you need a version that includes it to be able to use it.
A backup made using gfapi-fd.so needs a Job and a FileSet definition. For the
local crawling approach this would look somewhat like this:
Job {
  Name = "gfapitest"
  JobDefs = "DefaultJob"
  FileSet = "GFAPI"
}

FileSet {
  Name = "GFAPI"
  Include {
    Options {
      aclsupport = yes
      xattrsupport = yes
      signature = MD5
    }
    Plugin = "gfapi:volume=gluster\\://<volume_spec>:"
  }
}
When you want to use glusterfind you should start by reading the glusterfind
documentation and the setup it needs before configuring things in BAREOS. You
need to create at least one new glusterfind session that you are going to use
for backing up the Gluster volume; a sketch of that is shown below. There is a
wrapper named bareos-glusterfind-wrapper which is used in the pre and post
RunScript definitions of a backup Job to interact with glusterfind.
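
A rough sketch of creating such a session (session and volume names are
placeholders; run this on a node that is part of the Gluster trusted storage
pool):

  # create a glusterfind session for the volume to be backed up
  glusterfind create <session_name> <volume_name>
  # verify that the session exists
  glusterfind list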
For a backup with glusterfind, the FileSet and Job will look a little different
from the ones above, as you need to interact with glusterfind to get the actual
file list that the backup should use.
Job {
  Name = "gfapitest"
  JobDefs = "DefaultJob"
  FileSet = "GFAPI"
  RunScript {
    Runs On Client = yes
    Runs When = Before
    Fail Job On Error = yes
    Command = "<bareos_script_dir>/bareos-glusterfind-wrapper -l %l -s %m prebackup <volume_name> <output_file>"
  }
  RunScript {
    Runs On Success = yes
    Runs On Failure = yes
    Runs On Client = yes
    Runs When = After
    Command = "<bareos_script_dir>/bareos-glusterfind-wrapper postbackup <volume_name> <output_file>"
  }
}
FileSet {
  Name = "GFAPI"
  Include {
    Options {
      aclsupport = yes
      xattrsupport = yes
      signature = MD5
    }
    Plugin = "gfapi:volume=gluster\\://<volume_spec>:gffilelist=<output_file>"
  }
}
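
To try out either configuration, the job defined above can be started from
bconsole (the job name is the one defined in the Job resource):

  # inside bconsole
  run job=gfapitest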