robinhood_v3_admin_doc
Robinhood relies on a database backend to store a replica of filesystem metadata. Currently, it can use MySQL or MariaDB (a community fork of MySQL) as its database. MySQL/MariaDB offer a choice of DB engines such as InnoDB, MyISAM, TokuDB, ...
Running the database mainly determines the hardware requirements of a robinhood installation in terms of disk, memory and CPU:
- It is recommended to store the database on a dedicated disk, different from the system.
- Required disk space is around 1KB per entry (e.g. 100GB for 100 million entries, 1TB for 1 billion entries...). This sizing is influenced by the filesystem contents profile: entry name and path length, stripe width...
- A writeback cache protected from power loss greatly improves robinhood ingest rate without compromising database consistency.
- SSDs are better suited to databases than classic spinning disks. Even with a writeback cache, SSDs speed up random reads on the database, which occur for reporting and policy scheduling, and whenever memory cannot hold the whole DB. SSDs also offer many more IOPS if you don't have a write-back cache.
- The more memory, the better. The ideal configuration is to have the whole DB cached in memory, which is currently possible with a few hundred million entries. Otherwise the DB engine has to read from disk in some cases, which results in lower performance. Don't forget to tune the database memory pool size, whose default is ridiculously low, otherwise your gigabytes of memory will be of no use.
- Benchmarks show it is better to have a higher CPU frequency than more CPU cores, e.g. prefer 8 cores at 3GHz over 12 cores at 2.5GHz.
Robinhood requires:
- Access to the filesystem to manage: it must run on a filesystem client.
- Access to a MySQL or MariaDB database.
- It is recommended to run Robinhood on the same machine as the database server to avoid network latency for accessing the database.
- If it is not possible, the Robinhood machine must at least be configured as a database client.
The following software is required on the robinhood machine:
- MySQL or MariaDB client >= 5.1.6
- glib2 >= 2.16
- mailx (for sending mail alerts)
- Robinhood supports all POSIX filesystems, and provides exclusive features for all Lustre versions since 1.8.
- Specific requirements for Lustre/HSM feature (robinhood "lhsm" status manager):
- Lustre >= 2.5 (both client and servers)
- libuuid
- Specific requirements for backup feature (robinhood "backup" status manager):
- Lustre >= 2.0 (both client and servers)
Pre-generated RPMs are available on sourceforge download center for the following configurations:
- x86_64 architectures, RHEL 5/6/7 families.
- Built with the default MySQL or MariaDB version for the given RHEL release.
- Filesystems: POSIX, Lustre 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8
- robinhood-adm package provides administration helpers for database and filesystem management (rbh-config command). It is to be installed on the robinhood host. It can also be helpful on a Lustre MDS with Lustre 2.x.
- robinhood-posix package provides Robinhood for POSIX filesystems (includes binaries, init or systemd service, man pages, configuration templates, policy-specific plugins...).
- robinhood-lustre package provides Robinhood with specific features for Lustre filesystems (includes binaries, init or systemd service, man pages, configuration templates, policy-specific plugins...).
- Install build requirements:
yum install -y git autogen rpm-build autoconf automake gcc libtool \
    glib2-devel libattr-devel mariadb-devel mailx bison flex
- Plus, for Lustre:
yum install -y lustre-client libuuid-devel
- Retrieve Robinhood tarball from sourceforge download center and untar it:
tar zxf robinhood-3.0.tar.gz
cd robinhood-3.0
- or: Clone robinhood git repository and initialize the code tree:
git clone https://github.com/cea-hpc/robinhood.git
cd robinhood
./autogen.sh
Then, build robinhood RPMs by running:
./configure
make rpm
Specific configure options:
- --prefix=''path'': generate RPMs to install robinhood in an alternative location (default is /usr).
- --disable-lustre: force building robinhood for POSIX only, even if Lustre is installed on the machine.
- Other options are available. To list them all, run: ./configure --help
Robinhood needs a MySQL or MariaDB database to store its data. This database can run on a different host from the Robinhood node. However, a common configuration is to install robinhood on the DB host, to reduce DB request latency.
Install 'mysql' and 'mysql-server' packages on the node where you want to run the database engine.
Start the database engine:
- On EL6: service mysqld start
- On EL7: systemctl start mariadb
Robinhood needs 1 database per managed filesystem.
Basic configuration: for common needs, you can simply create the database using the helper command:
rbh-config create_db [options]
Advanced configuration: the helper above covers the most common configurations. If you have specific needs in terms of access control, database ownership, etc., you should create and set up the database using mysql commands directly, for example:
mysqladmin create robinhood_db
mysql robinhood_db
mysql> create user robinhood identified by 'password';
mysql> GRANT USAGE ON robinhood_db.* TO 'robinhood'@'localhost';
mysql> GRANT ALL PRIVILEGES ON robinhood_db.* TO 'robinhood'@'localhost';
mysql> GRANT SUPER ON *.* TO 'robinhood'@'localhost' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
Checking access rights:
mysql> SHOW GRANTS FOR robinhood;
Checking DB connection:
mysql --user=robinhood --password=password --host=db_host robinhood_db
Initially, the database schema is empty. Robinhood will create it the first time it starts.
In robinhood configuration, access to the database is configured in a ListManager block:
ListManager {
    MySQL {
        server = db_host;
        db = robinhood_db;
        user = robinhood;
        password_file = /etc/robinhood.d/.dbpassword;
    }
}
In the example above, /etc/robinhood.d/.dbpassword is a file with restricted access (root/600) that contains the password to access the database.
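For instance, this password file can be created as follows (a minimal sketch; replace 'SecretPass' with your actual DB password):
echo 'SecretPass' > /etc/robinhood.d/.dbpassword
chmod 600 /etc/robinhood.d/.dbpassword
chown root:root /etc/robinhood.d/.dbpassword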
/etc/my.cnf example
Here is an example of tunings in /etc/my.cnf to improve database performance. It is at least recommended to tune the values of innodb_buffer_pool_size and innodb_log_file_size, otherwise you will get poor performance.
# innodb_buffer_pool_size: recommended value is 80% of physical memory
innodb_buffer_pool_size = 100G
innodb_additional_mem_pool_size = 16M
innodb_max_dirty_pages_pct = 20
innodb_file_per_table = 1
innodb_data_file_path = ibdata1:100M:autoextend
innodb_write_io_threads = 32
innodb_read_io_threads = 32
innodb_flush_method=O_DIRECT
innodb_io_capacity=100000
innodb_autoinc_lock_mode=2
innodb_thread_concurrency=0
innodb_log_buffer_size=256M
# This parameter appears to have a significant impact on performances:
# see the following article to tune it appropriately:
# http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-log-file-size
innodb_log_file_size=500M
innodb_log_files_in_group=4
innodb_lock_wait_timeout=120

[mysqld_safe]
open-files-limit=2048
Need more IOPS?
To manage transactions efficiently, innodb needs a storage backend with high IOPS performance. You can monitor your disk load by running sar -d on your DB storage device: if the %util field is close to 100%, your database rate is limited by disk IOPS. In this case you have the choice between these 2 solutions, depending on how critical your robinhood DB content is:
- Safe (needs specific hardware): put your DB on an SSD device, or use a write-back capable storage that is protected against power failures. In this case, no DB operation can be lost.
- Cheap (and unsafe): add this tuning to /etc/my.cnf: innodb_flush_log_at_trx_commit=2. This results in flushing transactions to disk only every second, which dramatically reduces the required IO rate. The risk is to lose the last second of recorded information if the DB host crashes. This is acceptable if you scan your filesystem regularly (the missing information will be added to the database during the next scan). If you read Lustre changelogs, then you will need to scan your filesystem after a DB server failure.
This little script is very convenient for analyzing your database performance and it often suggests relevant tunings: http://mysqltuner.pl
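For example (a hedged sketch, assuming the sysstat package and perl are installed):
# watch IOPS and %util on the DB storage device, every 5 seconds
sar -d 5
# download and run mysqltuner against the local DB server
wget http://mysqltuner.pl -O mysqltuner.pl
perl mysqltuner.pl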
Robinhood's access pattern on Lustre is somewhat particular, as it mainly performs metadata calls to the filesystem, unlike most clients which perform I/O. So tuning the Lustre client where robinhood runs may differ from the usual client tunings. In particular, robinhood is more sensitive to mdc parameters than to osc parameters.
# example of tunings:
lctl set_param llite.*.statahead_max=4
lctl set_param ldlm.namespaces.*.lru_size=100
lctl set_param ldlm.namespaces.*.lru_max_age=1200
lctl set_param mdc.*.max_rpcs_in_flight=64
sysctl -w lnet.debug=0
Make sure the ko2iblnd peer_credits value is high enough to handle the specified max_rpcs_in_flight, and that the same parameter is set accordingly on the MDS.
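For instance, on nodes using the o2ib LND, the current value can be checked like this (a hedged sketch; peer_credits is a module option, usually set in /etc/modprobe.d):
cat /sys/module/ko2iblnd/parameters/peer_credits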
The changelog feature of Lustre (>= 2.0) allows robinhood to receive notifications of changes in a Lustre filesystem. Once you have populated the robinhood database by running an initial scan, you don't need to run other scans to update the database contents. You just need to make robinhood read the changelogs, so its database is updated in near real-time.
To enable this feature, you need to register a changelog reader on Lustre MDS:
- Run rbh-config enable_chglogs ''fsname'' on the MDS before the initial filesystem scan (you can also use the 'lctl changelog_register' command instead).
The reader is registered persistently. However, the changelog mask must be set again when restarting the MDS (prior to any filesystem operation).
- If you have multiple MDTs (DNE), you need to register 1 changelog reader per MDT.
- Remember the returned reader id (e.g. cl1) for each MDT: you will need to set it in the robinhood configuration (see the example after this list).
- In Robinhood configuration, set changelog parameters in a changelog block:
ChangeLog {
    # if you have multiple MDTs (DNE),
    # define one MDT block per MDT
    MDT {
        mdt_name  = "MDT0000";
        reader_id = "cl1";
    }
}
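For reference, registering a reader directly with lctl looks like this (a hedged sketch; the filesystem name 'myfs' is illustrative):
# on the MDS hosting MDT0000
lctl --device myfs-MDT0000 changelog_register
myfs-MDT0000: Registered changelog userid 'cl1'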
Lustre/HSM feature allows migrating data from a Lustre tier0 level to an external storage backend (tier1).
Requirements:
- Lustre >= 2.5 on both Lustre servers and clients running robinhood or copy agents.
- Enable hsm coordinator on MDS (see Lustre manual)
- Run a specific copytool for your backend (lhsmtool_posix, lhsmd_hpss, lhsmtool_dmf, ...) on some lustre clients.
- Enable Lustre MDT changelogs to allow robinhood to receive HSM events (see Enabling Lustre v2 changelogs).
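For example, the coordinator and a POSIX copytool can be started as follows (a hedged sketch; fsname, archive index and paths are illustrative, refer to the Lustre manual for your version):
# on the MDS: enable the HSM coordinator
lctl set_param mdt.myfs-MDT0000.hsm_control=enabled
# on a Lustre client acting as copy agent
lhsmtool_posix --daemon --hsm-root /archive --archive=1 /mnt/myfs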
By default, robinhood considers configuration files matching /etc/robinhood.d/*.conf
- For a single filesystem, you can write a configuration file like /etc/robinhood.d/''myfs''.conf. In this case, you don't have to specify a configuration file on the robinhood command line. You can simply run commands like:
robinhood --scan
rbh-find -name foo
rbh-report -i
- If you want to manage several filesystems on the same host, you must write 1 configuration file per filesystem, for example: /etc/robinhood.d/''myfs1''.conf, /etc/robinhood.d/''myfs2''.conf. In this case, you have to specify a configuration name on the robinhood command line:
robinhood -f myfs1 --scan
rbh-find -f myfs2 -name foo
rbh-report -f myfs1 -i
- If your configuration is located in a different directory, you need to specify the full path on command line, e.g.:
robinhood -f /home/foo/test.conf --scan
Robinhood RPM installs templates of configuration in /etc/robinhood.d/templates directory. They provide examples of various policy applications.
The following example is the minimal configuration file to run robinhood:
General {
    fs_path = "/path/to/fs";
    fs_type = fstype; # eg. lustre, ext4, xfs...
}
Log {
    log_file    = "/var/log/robinhood/myfs.log";
    report_file = "/var/log/robinhood/myfs_actions.log";
    alert_file  = "/var/log/robinhood/myfs_alerts.log";
}
ListManager {
    MySQL {
        server = db_host;
        db = robinhood_test;
        user = robinhood;
        password_file = /etc/robinhood.d/.dbpassword;
    }
}
# Lustre 2.x only
ChangeLog {
    MDT {
        mdt_name  = "MDT0000";
        reader_id = "cl1";
    }
}
For more details about these parameters and other possible options, refer to Configuration Reference.
If scanning your filesystem is expected to be a long operation (e.g. millions of entries), you should think about several things before you run the initial scan. Otherwise, you may need to scan again to take configuration changes into account.
So before you run the first scan it is recommended to:
- Define the fileclasses you want to appear in reports or use in policy rules (see Fileclasses).
- Include the policy declarations for policies you want to run on the filesystem (see Policy Declarations).
- If you plan to use Lustre changelogs, activate them before the scan; otherwise you could miss some filesystem changes.
To populate the DB, we have to run an initial scan. Unlike scanning in daemon mode, we just want to scan once and exit. So we run:
robinhood --scan --once
- If your filesystem contains millions of entries, the scan may take a while... In this case you can detach the scan from your terminal by specifying '--detach':
robinhood --scan --once --detach
- If you want to override configuration values for log file, use the '-L' option. For example, you can specify '-L stderr'
robinhood -L stderr --scan --once
You get something like this:
2015/11/13 14:08:02 FS_Scan | Starting scan of /mnt/lustre
2015/11/13 14:30:03 FS_Scan | Full scan of /mnt/lustre completed, 7130542 entries found (0 errors). Duration = 22.01min
2015/11/13 14:30:04 Main | FS Scan finished
If your filesystem doesn't have the changelog feature (non-Lustre filesystem, or Lustre 1.x), you will need to scan it regularly to update the robinhood database.
To set up regular scan:
- Specify a '--scan' option in /etc/sysconfig/robinhood (or if you manage multiple filesystems on RHEL7: /etc/sysconfig/robinhood.''fsname''):
RBH_OPT="--scan"
- Specify a scan interval in robinhood configuration (e.g. /etc/robinhood.d/myfs.conf):
fs_scan { scan_interval = 1d; }
- Run robinhood daemon:
RHEL 5/6: service robinhood start
RHEL 7: systemctl start robinhood
# or if you manage multiple filesystems:
systemctl start robinhood@fsname
The daemon will run a full filesystem scan. Once finished, it will wait for the given scan_interval before starting a new scan.
For the detail of fs_scan parameters, see Configuration Reference: fs_scan.
To start a robinhood daemon to run changelog continuously:
- Make sure you have enabled the Lustre changelog feature and written a 'ChangeLog' block in the robinhood configuration. If not, refer to this section: Enabling Lustre v2 changelogs.
- Specify the '--readlog' option in /etc/sysconfig/robinhood (or if you manage multiple filesystems on RHEL7: /etc/sysconfig/robinhood.''fsname''):
RBH_OPT="--readlog"
- Run robinhood daemon:
RHEL 5/6: service robinhood start
RHEL 7: systemctl start robinhood.service
# or if you manage multiple filesystems:
systemctl start robinhood@fsname
The daemon will continuously read changelog records from the filesystem and update its database accordingly.
Analyzing robinhood pipeline statistics is a good start to determine what the current performance limitation is: see pipeline stats.
See database tunings above.
Fileclasses are arbitrary sets of entries. They are defined using criteria on entry attributes.
Example of fileclass definition:
fileclass big_log {
    definition { name == "*.log" and size > 100MB }
}
- This feature can be used to categorize filesystem entries, thus giving an overall report of the filesystem's logical contents.
In the following example, we defined 5 fileclasses: A, B, small_files, std_files, big_files. The report indicates the amount of entries in the fileclasses and their intersections:
> rbh-report --class-info
fileclass, count, spc_used, volume, min_size, max_size, avg_size
A+big_files, 16, 411.91 GB, 411.91 GB, 1.06 GB, 232.89 GB, 25.74 GB
A+small_files, 488410, 48.69 GB, 47.61 GB, 1, 15.98 MB, 102.22 KB
A+std_files, 1144, 102.60 GB, 102.60 GB, 16.01 MB, 1000.00 MB, 91.84 MB
B+big_files, 8, 33.46 GB, 33.47 GB, 2.16 GB, 6.60 GB, 4.18 GB
B+small_files, 64078, 11.90 GB, 11.63 GB, 2, 15.98 MB, 190.29 KB
B+std_files, 629, 82.93 GB, 82.93 GB, 16.06 MB, 848.40 MB, 135.02 MB
- Fileclasses can be used to define policy targets in policy rules, e.g.
mypolicy_rules {
    rule myrule {
        target_fileclass = big_log;
        ...
    }
}
- A fileclass can be defined as the union, intersection or negation of several other fileclasses using inter, union and not operators.
# images that are not in class1 or class2
fileclass other_images {
    definition { images inter not (class1 union class2) }
}
- Policy action parameters can be specified per fileclass. They can be defined in a policy_action_params sub-block (where policy is the name of a policy). e.g.
fileclass system_logs {
    definition { owner == root and ( name == "*.log" or name == "*.log.gz" ) }
    # action parameters for policy 'mypolicy' and 'system_logs' fileclass
    mypolicy_action_params {
        priority = low;
        qos = 31;
    }
}
Robinhood v3 allows scheduling any kind of actions on filesystem entries.
New policies can be defined at will, just by writing a few lines of configuration. Policy specification is very flexible and allows configuring all behaviors, executed actions and their parameters.
Policies can rely on plugins to implement specific needs (status management, external interaction, specific actions...).
A policy is defined by the following elements. These elements are described in more details in the next sections.
- A policy declaration that specifies:
- A custom name for the policy. This sets up all robinhood resources to manage the policy (loading related configuration blocks, enabling the policy in command line options, ...).
- A status manager (optional) to manage the status of entries regarding this policy.
- A policy scope that statically defines the set of entries that must be considered for a given policy (type, status...). Unlike policy rules that can be changed at will, the policy scope is not supposed to change.
- Policy specific behaviors including:
- Default action for the policy.
- Default sort order to apply the policy.
- Templates of policy declarations are provided with robinhood package, so you can include them in your config file to implement legacy policies.
- Policy parameters: a set of configurable behaviors for the policy:
- Policy action (overrides default action from policy declaration).
- Policy sort order (overrides default sort order from policy declaration).
- Default action parameters.
- A maximum number of actions (or volume) per policy run, or a rate limitation.
- Schedulers that allow reordering or delaying action executions.
- Policy rules:
- They indicate what entries must be ignored for the policy.
- For each fileclass, they define a condition to apply the policy.
- They can override the default policy action.
- They can override and/or complete default action parameters.
- Policy triggers that specify:
- Conditions to start policy runs (e.g. at regular interval, if filesystem usage is over a high threshold, ...), and an optional set of resources to be checked (users, groups, OST pools...). Triggers are specified in the configuration file.
- A maximum number of actions (or volume) per policy run, or a rate limitation.
Policy declaration sets up all robinhood resources to manage a policy: loading related configuration blocks, enabling the policy in command line options, checking if entries are in the policy scope and checking entries status for the policy...
Defining a new policy is done by writing a define_policy configuration block.
The following example defines a simple "restripe" policy for Lustre:
define_policy restripe {
    status_manager = basic;
    scope { type == file and status != 'ok' }
    default_action = cmd("lfs migrate -c {new_stripe} {fullpath}");
    default_lru_sort_attr = last_access;
}
In this example:
- basic is a simple status manager provided by robinhood. It implements a set of 3 statuses: '' (not set), 'ok' (policy action successful) and 'failed' (policy action failed).
- The given scope indicates that the policy should apply to files that have not been restriped already.
- The default action is an external "lfs" command with variable arguments new_stripe and fullpath.
- By default, the policy applies to least recently accessed files first.
- Policy name is an arbitrary name without special characters.
- status_manager is the name of a status manager plugin. See Plugins documentation for the list of available status managers.
- It can be set to none if no entry status is to be managed for the policy.
- If the status manager manages several types of actions, the implemented action must be specified as a status_manager parameter (e.g. status_manager = lhsm(archive);).
- If the policy manages deleted entries, removed or deleted must be specified as status_manager parameter (e.g. status_manager = lhsm(removed);).
- scope can be:
- a boolean expression matching entry attributes, policy 'status', or policy-specific criteria.
- scope = '''all''' indicates that the policy can possibly apply to all entries in the filesystem (all entries are matched against policy rules).
- default_action: see Actions section for details about specifying policy actions.
- default_lru_sort_attr: specifies the default sort order to apply the policy. It can be a time attribute (e.g. 'last_access'), or none for no sorting (better policy application performance).
For common use cases, you don't need to define policies like in the previous section: Robinhood provides policy declaration templates for common use cases and legacy policies (of robinhood 2.x). These templates are installed in the /etc/robinhood.d/includes directory. To use them, simply include the desired template in your robinhood configuration file, e.g.:
# this defines policies for Lustre/HSM
%include "includes/lhsm.inc"

# this defines a 'cleanup' policy to delete old unused files (legacy 'tmpfs' purge)
%include "includes/tmpfs.inc"
Note that it is not recommended to modify the policy templates directly. If you want to modify the behavior of a policy, it is better to change parameters in the related policy_parameters block (see Policy parameters).
The following templates are installed with robinhood:
- includes/alerts.inc: defines 'alert' policy to raise alert about entries matching some defined criteria.
- includes/backup.inc: defines policies to archive a Lustre 2.x filesystem to a POSIX backend (implements robinhood legacy mode 'backup'). This does not rely on the Lustre/HSM feature.
- 'backup_archive' policy to archive files (stands for the 'migration' policy of legacy 'backup' mode).
- 'backup_remove' policy to clean deleted entries from the archive (deferred removal policy of legacy 'backup' mode).
- includes/lhsm.inc: defines policies for Lustre/HSM (robinhood legacy mode 'lhsm'):
- 'lhsm_archive' policy controls file archiving to Lustre/HSM backend ('migration' policy of legacy 'lhsm' mode).
- 'lhsm_release' policy releases disk space in Lustre/HSM ('purge' policy of legacy 'lhsm' mode).
- 'lhsm_remove' policy controls the cleaning of deleted entries in the Lustre/HSM backend ('hsm_rm' policy of legacy 'lhsm' mode).
- includes/rmdir.inc: defines 'rmdir_empty' and 'rmdir_recurse' policies (these policies replace the legacy 'rmdir' policy of 'tmpfs' mode).
- includes/tmpfs.inc: defines a 'cleanup' policy to delete old unused entries from a filesystem ('purge' policy of legacy 'tmpfs' mode).
Policy specific behaviors can be specified in a policy_parameters block, where policy stands for the policy name.
In this block, you can:
- Specify the number of threads to run the policy (i.e. match entries and execute policy actions) by setting the nb_threads parameter.
- Limit the max number of executed actions per run (or volume of impacted entries) by setting max_action_count and/or max_action_volume parameters.
- Override the default sort order for the policy by setting lru_sort_attr parameter.
- Override the default action for the policy by setting the action parameter.
- Specify parameters for policy actions (see Action parameters section below), in an action_params sub-block.
- Define thresholds to suspend a policy run if too many errors occur: suspend_error_pct and suspend_error_min parameters.
- Make policy application more intensive before a maintenance (pre_maintenance_window and maint_min_apply_delay parameters).
- Specify logging/reporting options during policy runs (report_interval, report_actions, ...)
- Specify one or several schedulers, by specifying a schedulers parameter. It expects a comma-separated list of schedulers (e.g. common.max_per_run,common.rate_limit). Schedulers can have specific parameters that are specified in a sub-block of policy parameters, having the same name as the scheduler.
- Specify the attributes that must be updated and checked by the policy rules before and after scheduling, pre_sched_match and post_sched_match. Possible values are:
- "none": no attribute matching
- "cache_only": match attributes from DB
- "auto_update": update attributes needed by the policy before matching
- "force_update": update all attributes before matching
- You can configure commands to be executed before/after each policy run using pre_run_command and post_run_command parameters. Notice that a policy run is aborted if the pre_run_command fails.
- Specify various other behaviors: see Configuration Reference: Policy parameters for the full list of policy parameters.
# define parameters for 'mypolicy'
mypolicy_parameters {
    nb_threads = 4;

    # override default action for 'mypolicy'
    action = cmd("/usr/bin/run_it.sh -a {myarg1} -f {fullpath}");

    # default parameters for action (can be overridden in policy rules)
    action_params {
        myarg1 = foo;
    }

    schedulers = common.rate_limit;
    rate_limit {
        # max 1000 and 100GB per 10 second timeframe
        max_count = 1000;
        max_size  = 100GB;
        period_ms = 10000;
    }

    pre_sched_match  = cache_only;
    post_sched_match = auto_update;
}
Policy rules specify what action is to be taken on which entries, and when. Policy rules are specified in a policy_name_rules block in the robinhood configuration file. (Note that *_rules blocks were called *_policies in robinhood v2.x. They have been renamed to avoid any confusion with policy definitions. In the same way, policy foo sub-blocks have been renamed to rule foo).
The following configuration block gives an example of various policy rules for a policy mypolicy:
mypolicy_rules {
    # don't process entry of fileclass 'blacklisted' and 'waste'
    ignore_fileclass = blacklisted;
    ignore_fileclass = waste;

    # don't process entries in the namespace tree under /fs/debug:
    ignore { tree == "/fs/debug" }

    # simple policy rule
    rule myrule1 {
        target_fileclass = oneclass;
        target_fileclass = anotherclass;
        # time condition to run policy actions on entries
        # of these classes
        condition { last_access > 1d }
    }

    # policy rule overriding default action of mypolicy
    rule myrulename2 {
        target_fileclass = classX;
        target_fileclass = classY;
        action = cmd("/usr/local/bin/do_something.sh -a {arg} -c {fileclass} {fullpath}");
        condition { last_access > 1d }
    }

    # policy rule overriding default action parameters of mypolicy
    rule myrule3 {
        target_fileclass = classZ;
        target_fileclass = classAB;
        action_params { arg = 42; }
        condition { last_access > 1w }
    }

    # default policy rule (optional): applies to entries that don't
    # match previous target fileclasses:
    rule default {
        action = cmd("/usr/local/bin/do_otherthing.sh -a {arg} {fullpath}");
        condition { last_access > 4d }
    }
}
Formally:
policy_rules block (where policy stands for the policy name) consists of:
- ignore_fileclass statements that indicate fileclasses to ignore for this policy
- ignore blocks with arbitrary criteria on entries to ignore for this policy
- rule rulename sub-blocks. Each of them consists of:
- one or several target_fileclass.
- optional action to override policy default (see Actions below for details about specifying policy actions).
- optional action_params sub-block to complete or override default actions parameters (see Action Parameters below for more details).
- optional rule default sub-block: fallback rule for entries that don't match targets of previous rules.
mypolicy_rules {
    rule rule1 {
        target_fileclass = A;
        ...
    }
    rule rule2 {
        target_fileclass = B;
        ...
    }
}
Note that 'ignore' and 'ignore_fileclass' statements always take priority over policy rules, wherever they are defined in the policy_rules block.
Policy actions are fully configurable in robinhood v3.
- An action can refer to a function implemented by a plugin (we will call them "embedded" actions). In this case, it is specified as plugin_name.action_name, e.g.:
action = common.unlink;
action = lhsm.archive;
Each action can implement specific parameters. Refer to section Action Parameters to see how to specify action arguments. See the specific documentation of plugins for details about supported parameters.
- An action can also be specified as a command line, possibly using placeholders for its arguments. In this case it is specified as cmd("command_line"), e.g.:
action = cmd("rm -rf {fullpath}");
action = cmd("lfs migrate -c {new_stripe_count} {fsroot}/.lustre/fid/{fid}");
For more details about action arguments and placeholders, refer to section Action Parameters.
- Action can be set to none (no operation is actually performed) e.g. action = none.
An action can be specified at several levels:
- In the policy declaration (default_action parameter).
- In policy parameters (action parameter): an action specified in policy parameters overrides the default_action specified in the policy declaration.
- In a policy rule (action parameter): action specified in a policy rule overrides actions specified in policy declaration or policy parameters.
COMING SOON: EXAMPLE OF MULTI-LEVEL ACTION DEFINITION
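Pending that example, here is a minimal illustrative sketch (the policy, scripts and fileclasses are hypothetical; syntax follows the blocks described above):
define_policy mypolicy {
    status_manager = basic;
    scope { type == file }
    # level 1: default action from the policy declaration
    default_action = common.unlink;
    default_lru_sort_attr = last_access;
}
mypolicy_parameters {
    # level 2: overrides the default_action above
    action = cmd("/usr/local/bin/generic_cleanup.sh {fullpath}");
}
mypolicy_rules {
    rule special {
        target_fileclass = big_log;
        # level 3: overrides the action for this rule only
        action = cmd("/usr/local/bin/special_cleanup.sh {fullpath}");
        condition { last_access > 30d }
    }
    rule default {
        # uses the action from mypolicy_parameters
        condition { last_access > 90d }
    }
}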
Action parameters are defined as lists of key/value pairs passed to policy actions, and can be used to build command line arguments.
They replace the "hints" parameters of robinhood v2 in a more flexible way.
Action parameters are specified in action_params sub-blocks. They can be defined at multiple levels, thus allowing a high flexibility in specifying actions parameters:
- in policy parameters,
- in triggers,
- in policy rules,
- in fileclasses (in this case, sub-block is called policy_name_action_params to distinguish the parameters of different policies).
Example:
lmigr_parameters {
    # default parameters for 'lmigr' policy actions
    action_params {
        pool = pool1;
        stripe_count = 4;
    }
}
lmigr_rules {
    # this rule uses default action_params from lmigr_parameters
    rule migr_foo {
        target_fileclass = foo;
        condition { last_mod > 1d }
    }

    # This rule overrides the "pool" parameter,
    # keeps the default for stripe_count (4),
    # and defines an additional "stripe_size" parameter
    rule migr_bar {
        target_fileclass = bar;
        action_params {
            pool = pool2;
            stripe_size = 4MB;
        }
        condition { last_mod > 1d }
    }
}
Note: action parameters are only inherited from the fileclass that first matches a "target_fileclass" statement. In the example above, if an entry matches both foo and bar fileclasses, it will inherit action_params from foo.
Action parameters values can include placeholders for special values. These placeholders are replaced in the action context:
- {name}: name of the entry.
- {path} or {fullpath}: full path of the entry.
- {fid}: filesystem identifier of the entry.
- {ost_pool}: OST pool where the entry is stored (Lustre only).
- {fsname}: filesystem name (from /etc/mtab).
- {fsroot} or {fspath}: path of the filesystem managed by robinhood ('general::fs_path' config parameter).
- {cfg}: path of robinhood configuration file.
- {rule}: name of the matching policy rule.
- {fileclass}: matched fileclass in the policy rule.
action_params {
    handler = "/hpc-fs/.lustre/fid/{fid}";
    why = "rule={rule},class={fileclass},rbh_cfg={cfg}";
    src_pool = "{ost_pool}";
}
Notes:
- It is possible to build JSON in parameter values. For example, such a string is properly processed by robinhood params engine:
json = '{"id":"{fid}","src_pool":"{ost_pool}"}';
- It is possible to reference an action param in another action param, but only if it does not contain a placeholder itself.
# This is allowed:
bar = "xyz";
foo = "bla bla {bar}";

# This is NOT recommended (undetermined result):
bar = "bla {name}";
foo = "bla bla {bar}";
# Even worse (cross reference):
foo = "bla bla {bar}";
bar = "xyz {foo}";
- User-defined parameters ('action_params') take priority over special parameters listed above. This means you can override the value of special parameters with your own value. This is however quite confusing, so... not recommended.
action_params { path = "foo"; }
In this case, action = cmd("/usr/bin/doit.sh {path}"); will get 'foo' as its argument instead of the actual entry path(!)...
For "embedded" actions:
Embedded policy actions (functions defined in plugins) each support a given set of parameters that can differ from one function to another. Refer to the documentation of these functions (in the plugins doc) to know what parameters they expect.
Examples:
- Action lhsm.archive allows specifying an archive_id parameter to indicate the target archive system.
- Action common.copy expects a targetpath parameter indicating the target copy path.
For external commands (command lines):
- You can refer to any parameter name between braces in an action command line. This placeholder is replaced by the parameter value when the command is executed.
- You can also use any of the special placeholders listed above.
action = cmd("lfs migrate -c {new_stripe} {fullpath}");
action_params { new_stripe = 2; }
In the example above, {new_stripe} is replaced by the specified value for action parameter new_stripe, i.e. "2". And {fullpath} is replaced by the full path of the entry that is being processed by the policy. So the executed command for an entry '/fs/foo/bar' is: lfs migrate -c '2' '/fs/foo/bar'.
Once a policy has been defined in configuration, let's see how to run it.
Triggers define conditions to automatically start a policy run when a particular condition is met while robinhood runs as a daemon. Triggers can determine an amount of entries to process, and target a subset of filesystem entries.
In v3.0, the following trigger types are implemented:
- periodic/scheduled: policy is run at regular interval (no particular condition).
- global_usage: policy run is started when the overall filesystem usage (df) exceeds the configured threshold.
- user_usage: policy run is started when the usage of a given user exceeds the configured usage threshold (count or volume). Policy then applies only to this user's files.
- group_usage: policy run is started when the usage of a given group exceeds the configured usage threshold (count or volume). Policy then applies only to this group's files.
- ost_usage: policy run is started when an OST usage (lfs df) exceeds the configured threshold. Policy then applies only to files stored on this OST.
- pool_usage: policy run is started when the pool usage exceeds the configured threshold. Policy then applies only to files of this pool.
Triggers are defined by policyname_trigger blocks. You can have several trigger blocks for a policy. Trigger blocks can specify the following parameters:
- trigger_on = trigger_type[(args)] where trigger_type is one of the types listed above (global_usage, user_usage...). Some trigger types allow optional arguments:
- user_usage(user_pattern1,user_pattern2,...) (for example: user_usage(user*,foo,bar)): check usage only for the given user login patterns.
- group_usage(group_pattern1,group_pattern2,...) (for example: group_usage(group*,project1,foogrp)): check usage only for the given group name patterns.
- pool_usage(pool1,pool2,...): check usage only for the given pool names (no pattern allowed).
- check_interval: interval for checking trigger condition (or running the policy in the case of scheduled trigger).
- high_threshold_{cnt|pct|vol}: threshold to trigger the policy run ('cnt' for entry count, 'pct' for disk usage percentage, 'vol' for used volume).
- alert_high = {yes|no}: raise an alert (like sending a mail) when the high threshold is reached.
- low_threshold_{cnt|pct|vol} = value: threshold to stop the policy run ('cnt' for entry count, 'pct' for disk usage percentage, 'vol' for used volume).
- alert_low = {yes|no}: raise an alert (like sending a mail) when the low threshold could not be reached by applying the policy.
- max_action_{count|volume} = value: maximum number of actions to be executed per policy run started by this trigger (or the maximum volume of impacted files).
- action_params { key = val; key = val; ...}: define specific action parameters when the policy run is started by this trigger (See action parameters for more details).
- For the detailed list of possible trigger parameters, see: Configuration Reference: triggers.
# trigger cleanup policy if any user matching 'foo*' exceeds 100K entries.
cleanup_trigger {
    trigger_on = user_usage(foo*);
    high_threshold_cnt = 100K;
    low_threshold_cnt  = 95K;
    alert_high = yes;
}
# trigger lhsm_release policy if any OST exceeds 85% of disk usage.
lhsm_release_trigger {
    trigger_on = ost_usage;
    high_threshold_pct = 85%;
    low_threshold_pct  = 80%;
}
Checking triggers:
- Continuously check triggers for a policy and apply the policy if necessary (daemon mode):
- Check triggers once, apply the policy if necessary, then exit:
- Check triggers conditions once, but do not apply the policy (just report if thresholds are exceeded):
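These checks map to command lines like the following (a hedged sketch for a hypothetical policy 'mypolicy'; check robinhood --help for the exact options of your version):
# daemon mode: continuously check triggers and run the policy when needed
robinhood --run=mypolicy --detach
# check triggers once, run the policy if needed, then exit
robinhood --run=mypolicy --once
# only report whether thresholds are exceeded, without running the policy
robinhood --check-thresholds=mypolicy --once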
Manual policy runs:
The following commands allow triggering policy runs manually. These commands do not rely on triggers and do not require triggers to be defined.
General syntax is: robinhood --run="''policyname''(''args'')" where args is a comma-separated list of parameters.
The following parameters are supported:
- '''target'''=''tgt'': specifies a target subset of entries to run the policy on, where tgt can be:
- all : run on all filesystem entries (matching the policy scope).
- user:''username'' : run on entries of the given user.
- group:''groupname'' : run on entries of the given group.
- file:''path'' : run on the given entry.
- class:''fileclass'' : run on entries in the given fileclass.
- ost:''ost_idx'' : run on entries stored on the given OST index.
- pool:''poolname'' : run on entries in the given OST pool.
- '''max-count'''=''nbr'' Max number of actions to execute for a policy run.
- '''max-vol'''=''size'' Max volume of entries impacted by a policy run.
- '''target-usage'''=''pct'' Targeted filesystem or OST usage for a policy run (in percent). Example: if current usage is 80% and target usage is 75%, the policy will run on a volume of entries equivalent to 5% of the filesystem (or 5% of the OST if target is an OST).
e.g. --run=mypolicy(user:foo), --run=mypolicy(all,max-count=10K)...
Command examples:
- Run lhsm_archive policy on all entries of user foo:
robinhood --run="lhsm_archive(target=user:foo)"
- Run lhsm_release policy on ost#25 until space used on OST is 70%:
robinhood --run="lhsm_release(target=ost:25,target-usage=70%)"
- Run mycopy policy on all filesystem entries. Stop after copying 10TB of data or 100K entries:
robinhood --run="mycopy(all,max-vol=10TB,max-count=100K)"
Variations:
- You can run multiples policies in a single command:
- When running a single policy, you can specify target and target-usage arguments as standard options:
- When running multiple policies, you can use the same options for all policies:
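For instance (a hedged sketch based on the --run syntax above; the standalone --target/--target-usage option spellings are assumptions, check robinhood --help for your version):
# run several policies in a single command
robinhood --run="lhsm_archive(target=class:to_archive),lhsm_release(target=ost:25)"
# single policy: target and usage limit passed as standard options
robinhood --run=lhsm_release --target=ost:25 --target-usage=70
# same options applied to all listed policies
robinhood --run=lhsm_archive,lhsm_release --target=user:foo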
rbh-report provides various reports about filesystem contents: overall filesystem info, stats per user, per group, per fileclass, per policy status...
Some examples of commands and their output:
- Filesystem summary (fsinfo):
> rbh-report -f myfs --fsinfo
type, count, volume, spc_used, avg_size
symlink, 1, 67, 4.00 KB, 67
dir, 11532, 82.86 MB, 83.29 MB, 7.36 KB
file, 356382, 598.44 TB, 595.75 TB, 1.72 GB
Total: 367915 entries, volume: 657991980128673 bytes (598.44 TB), space used: 655029362610176 bytes (595.75 TB)
- Fileclasses summary (classinfo):
> rbh-report -f scratch --classinfo
fileclass, count, volume, spc_used, min_size, max_size, avg_size
BigFiles, 17034, 264.98 TB, 263.28 TB, 8.04 GB, 76.56 GB, 15.93 GB
BiggerFiles, 369, 37.94 TB, 37.52 TB, 80.83 GB, 450.47 GB, 105.28 GB
EmptyFiles, 201, 0, 0, 0, 0, 0
SmallFiles, 112752, 420.72 GB, 417.45 GB, 3, 16.00 MB, 3.82 MB
StdFiles, 228655, 255.49 TB, 255.01 TB, 16.06 MB, 8.00 GB, 1.14 GB
Total: 367915 entries, volume: 657991980128673 bytes (598.44 TB), space used: 655029363482624 bytes (595.75 TB)
- Status summary (statusinfo):
> rbh-report -f archive --statusinfo=lhsm
lhsm.status, type, count, volume, spc_used, avg_size
new, file, 1488, 812.19 GB, 817.01 GB, 558.93 MB
synchro, file, 356391, 598.49 TB, 595.80 TB, 1.72 GB
released, file, 5625480, 54.30 PB, 0, 1.01 GB
...
- All entry information (--entryinfo):
> rbh-report -e /mnt/lustre/robinhood-3.0/src/robinhood/rbh_find.c
id              : [0x200000400:0x1a679:0x0]
parent_id       : [0x200000400:0x1a672:0x0]
name            : rbh_find.c
path updt       : 2016/09/15 13:30:01
path            : /mnt/lustre/robinhood-3.0/src/robinhood/rbh_find.c
depth           : 3
user            : root
group           : root
size            : 54.71 KB
spc_used        : 56.00 KB
creation        : 2016/09/15 13:28:11
last_access     : 2016/09/15 13:28:09
last_mod        : 2016/09/09 16:08:20
last_mdchange   : 2016/09/15 13:28:11
type            : file
mode            : rw-r--r--
nlink           : 1
md updt         : 2016/09/15 13:30:03
invalid         : no
stripe_cnt, stripe_size, pool: 2, 1.00 MB,
stripes         : ost#0: 19140, ost#1: 19136
lhsm.status     : synchro
lhsm.archive_id : 1
lhsm.no_release : no
lhsm.no_archive : no
lhsm.last_archive: 2016/09/15 13:28:39
lhsm.last_restore: 0
- Display user summary (userinfo) and file size profile (szprof):
> rbh-report -u foo --szprof
user, type, count, volume, spc_used, avg_size, 0, 1~31, 32~1K-, 1K~31K, 32K~1M-, 1M~31M, 32M~1G-, 1G~31G, 32G~1T-, +1T
foo, symlink, 206, 8.62 KB, 40.00 KB, 43, 0, 55, 151, 0, 0, 0, 0, 0, 0, 0
foo, dir, 158, 648.00 KB, 648.00 KB, 4.10 KB, 0, 0, 0, 158, 0, 0, 0, 0, 0, 0
foo, file, 1104, 8.70 MB, 11.39 MB, 8.07 KB, 16, 9, 358, 663, 58, 0, 0, 0, 0, 0
And much more reports...
Run man rbh-report or rbh-report --help for a detailed list of reports and options.
rbh-find is a find-like command, much faster as it is based on robinhood's database (a database is much better suited to criteria-based queries than a filesystem, which is designed for I/O). If you are using Lustre v2 changelogs, you also get fresher results, as the database is fed in near real-time, whereas a long-running standard 'find' produces results that are already out of date when it completes. Note this command may require access to the filesystem (not only a connection to the DB).
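For example (a hedged sketch; option names follow the usual find syntax, see rbh-find --help):
# files of user 'foo' larger than 1GB matching '*.tar'
rbh-find /mnt/lustre -user foo -size +1G -name "*.tar"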
See man rbh-find or rbh-find --help for more details.
rbh-du is an enhanced implementation of the standard 'du' command. It queries the robinhood DB instead of scanning the filesystem, which results in faster execution.
Compared to "du" it provides the following additional features:
- filtering per user (-u option), per group (-g option) or per type (-t option)
- more detailed outputs (-d option): display entry count, size and disk usage per type.
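For example (a minimal sketch using the options listed above):
# detailed per-type usage of user 'foo' under a given directory
rbh-du -d -u foo /mnt/lustre/project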
See man rbh-du or rbh-du --help for more details.
rbh-diff command performs a filesystem scan and displays differences with the information currently stored in robinhood database. Optionally, it can apply those changes to the database or revert the detected changes in the filesystem. It can be used:
- For disaster recovery purpose: after restoring an outdated version of filesystem metadata from a snapshot, rbh-diff can restore the metadata changes that occurred after the snapshot time.
- If you are just curious about what changed in the filesystem since the last robinhood scan.
- For debugging purpose: to detect robinhood database inconsistencies.
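For example (a hedged sketch; the --apply option is an assumption, check rbh-diff --help for your version):
# list differences between the filesystem and the robinhood DB
rbh-diff -f myfs
# apply the detected changes to the robinhood database
rbh-diff -f myfs --apply=db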
If you run an archive policy (e.g. 'lhsm_archive'), the rbh-undelete command allows you to recover archived entries that have been accidentally removed from the filesystem Robinhood manages.
rbh-undelete can list the entries that can be recovered for a given directory or the whole filesystem. Example:
> rbh-undelete --list /mnt/lustre/dir.1
rm_time, id, type, user, group, size, last_mod, lhsm.status, path
2017/06/22 16:08:04, [0x200000401:0x3e9:0x0], file, root, root, 0, 2017/06/22 16:06:49, synchro, /mnt/lustre/dir.1/file.1
2017/06/22 16:08:04, [0x200000401:0x3f2:0x0], file, root, root, 0, 2017/06/22 16:06:49, synchro, /mnt/lustre/dir.1/file.10
2017/06/22 16:08:04, [0x200000401:0x3f3:0x0], file, root, root, 0, 2017/06/22 16:06:49, synchro, /mnt/lustre/dir.1/file.11
# restore a single file
> rbh-undelete -R /mnt/lustre/dir.1/file.1
Restoring '/mnt/lustre/dir.1/file.1'...
restore OK (file)

# restore all deleted files in a given directory
> rbh-undelete -R /mnt/lustre/dir.1
Restoring '/mnt/lustre/dir.1/file.10'...
restore OK (file)
Restoring '/mnt/lustre/dir.1/file.11'...
restore OK (file)
A major change in robinhood v3 is its plugin-based architecture. Policy-specific aspects (status managers, actions) have been moved out of the robinhood core and are now managed as plugins installed in /usr/lib64/robinhood/. Plugins allow new usages and new modes of distribution, and make it possible to address site-specific needs with plugins that can easily be integrated into robinhood:
- Robinhood is shipped with plugins for common and legacy usages (Lustre/HSM policies, cleaning of scratch filesystems, backup, ...).
- Vendors can implement and distribute their own plugins for robinhood, to provide specific features to their customers, and optimizations for their specific hardware/software stack.
- Advanced users can implement their own plugins to address their site-specific needs.
This section describes the plugins shipped with robinhood v3.0.
"common" plugin implements the following actions:
- common.unlink: remove an entry (man 2 unlink). No specific action parameter is required. Robinhood must know the path of the entry to perform it.
- common.rmdir: remove an empty directory (man 2 rmdir). No specific action parameter is required. Robinhood must know the path of the entry to perform it.
- common.copy: Copy a file from its current location to a targetpath.
- targetpath: mandatory action parameter. Path to the copy target.
- nosync: optional parameter. Don't flush the copy to disk after each copy (higher performance, possible data loss if system crashes).
- compress: optional parameter. Compress the copy (behaves as 'common.gzip').
- common.sendfile: Copy a file from its current location to a targetpath using sendfile() system call (man 2 sendfile).
- targetpath: mandatory action parameter. Path to the copy target.
- nosync: optional parameter. Don't flush the copy to disk after each copy (higher performance, possible data loss if system crashes).
- common.gzip: Copy a file from its current location to a targetpath with gzip compression.
- nosync: optional parameter. Don't flush the copy to disk after each copy (higher performance, possible data loss if system crashes).
- common.log: just display a line in the robinhood log like:
"LogAction | fid=entry_id, path=entry_path, params={action_params}".
lhsm plugin provides a status manager and specific actions for Lustre/HSM.
It is only available for Lustre filesystems >= 2.5.
You can find examples of lhsm-based policies in /etc/robinhood.d/templates.
lhsm status manager manages the following set of statuses:
- '' (empty): the status of the entry is not determined.
- new: entry in Lustre filesystem has never been archived.
- modified: entry has been copied to the archive but it has been modified since then (data in the filesystem is newer than in the archive).
- synchro: entry has been copied to the archive and has not been modified since then (data in the filesystem is the same as in the archive).
- released: data has been released from Lustre (entry data is only present in the archive).
- archiving: file is being copied to the archive.
- retrieving: data is being restored from the archive.
lhsm status manager maintains the following policy-specific attributes:
- lhsm.archive_id: last archive id where the file has been copied to.
- lhsm.no_release: indicates whether the 'norelease' flag has been set for this entry (lfs hsm_set --norelease).
- lhsm.no_archive: indicates whether the 'noarchive' flag has been set for this entry (lfs hsm_set --noarchive).
- lhsm.last_archive: last time the entry was archived.
- lhsm.last_restore: last time the entry was restored.
- lhsm.uuid: identifier of the entry in the archive backend. This is the value of an extended attribute stored in Lustre (see parameter uuid::xattr below).
lhsm plugin provides the following actions:
- lhsm.archive: archive an entry to the backend.
- archive_id: optional action parameter. Indicates the target 'archive_id' where the file is to be archived.
- other action parameters: all other action parameters are passed to the copytool that performs the 'archive' operation, as a comma-separated list of "key=value" (like "lfs hsm_archive --data 'k1=v1,k2=v2...' <file>").
- lhsm.release: release entry data from the filesystem (entry must be 'synchro').
- lhsm.remove: remove an entry in the backend after it has been removed from Lustre.
Specific configuration for lhsm can be specified in a lhsm_config block.
The following parameters are supported:
- rebind_cmd: this command is needed to perform undelete operation on a Lustre/HSM setup.
- uuid::xattr: indicates the extended attribute the copytool uses to store the entry identifier in Lustre.
lhsm_config {
    # copytool-specific
    rebind_cmd = "lhsmtool_posix --archive={archive_id} --rebind {oldfid} {newfid} {fsroot}";
    # for UUID-based mapping
    uuid {
        xattr = "trusted.lhsm.uuid";
    }
}
backup plugin provides a status manager and specific actions implementing legacy 'robinhood-backup'.
backup status manager manages the following set of statuses:
- '' (empty): the status of the entry is not determined.
- new: entry in Lustre filesystem has never been archived.
- modified: entry has been copied to the archive but it has been modified since then (data in the filesystem is newer than in the archive).
- synchro: entry has been copied to the archive and has not been modified since then (data in the filesystem is the same as in the archive).
- archiving: file is being copied to the archive.
backup status manager maintains the following policy-specific attributes:
- backup.backend_path: path of the archive copy in external storage.
- backup.last_archive: last time the entry was archived.
This plugin does not implement or provide specific actions.
backup policies only rely on standard actions, like "common.copy".
Specific parameters for backup status manager must be defined in a backup_config block.
Example:
backup_config {
    root = "/backend";
    mnt_type = nfs;
    check_mounted = yes;
    copy_timeout = 6h;
    #recovery_action = common.copy;  #default
}
basic status manager is a simple status manager that implements 2 statuses: 'ok' and 'failed'. Entry status is set according to the status of policy actions run on this entry. This status manager can be used to implement simple policies.
basic status manager implements the following set of statuses:
- '' (empty): the status of the entry is not set (no action has been run on the entry).
- ok: policy action ran successfully on the entry.
- failed: policy action failed for this entry.
define_policy do_smthg {
    status_manager = basic;
    # process files that have never been checked before
    scope { type == file and status == "" }
    default_lru_sort_attr = last_mod;
    # default action
    default_action = cmd("/usr/bin/process_file.sh {path}");
}
The purpose of the alerter status manager is to check entry attributes at regular intervals and raise an alert if they match defined rules.
You can find examples of alert policy in /etc/robinhood.d/templates.
Entries can have the following statuses regarding alerts:
- '' (empty): the entry has never been checked.
- clear: the entry has been checked and doesn't match an alert rule.
- alert: the entry has been checked and matches an alert rule.
It maintains the following specific attributes for filesystem entries, useful for alert management:
- alerter.last_check: last time the entry was checked.
- alerter.last_alert: last time an alert was raised for the entry.
Whatever the alert action you specify, alert rules must specify an alert action parameter with one of the following values:
- raise: entry status is set to alert if the policy action succeeds. last_alert and last_check attributes are also updated.
- clear: entry status is set to clear if the policy action succeeds. Only last_check attribute is updated.
Alerter plugin provides an alert action, that can be invoked like this in the policy parameters or the policy rules:
action = alerter.alert ;
Depending on robinhood configuration (in 'Log' block), this action will log the entry to an alert file or send an alert mail to the configured alert address. This action interprets the title action parameter, that is used as the alert short title.
Instead, you can also use custom alert actions to report alerts (e.g. common.log).
To easily configure alerts, you can use the pre-configured 'alert' policy defined in configuration templates (includes/alerts.inc).
Here is an example of alert policy using it:
%include "includes/alerts.inc"

alert_rules {
    # don't check entries more frequently than daily
    ignore { last_check < 1d }
    # don't check entries while they are modified
    ignore { last_mod < 1h }

    rule raise_alert {
        ## List all fileclasses that would raise alerts BELOW:
        target_fileclass = f1;
        target_fileclass = f2;
        target_fileclass = largedir;

        # customize alert title:
        action_params { title = "entry matches '{fileclass}' ({rule})"; }

        # apply to all matching fileclasses in the policy scope
        condition = true;
    }

    # do nothing and clear alert status for other entries
    rule default {
        action = none;
        action_params { alert = clear; }
        # apply to all entries that don't match rule 'raise_alert'
        condition = true;
    }
}
The purpose of the checker status manager is to run a command on filesystem entries at regular intervals, save its output (on success), and reuse this output for subsequent checks. A common use case is detecting silent data corruption: it computes file checksums and verifies that the checksum is unchanged the next time the entry is checked (if the entry has not been modified).
You can find an example of policy using checker in /etc/robinhood.d/templates: see checksum policy.
Checker status can take the following values:
- '' (empty): the policy action has never been run on this entry.
- ok: policy action ran successfully on the entry (return code = 0).
- failed: policy action failed for this entry (return code != 0).
Checker maintains the following attributes for entries:
- checker.last_check: last time the check command was run on the entry.
- checker.last_success: last time the check command succeeded on the entry (which is also the time when the output attribute was last updated).
- checker.output: output of the last successful check command.
Robinhood provides an example implementation of a check command, installed as /usr/sbin/rbh_cksum.sh. This script expects 2 arguments: its own previous output and the entry path. So it could be called like this in the policy declaration:
default_action = cmd("/usr/sbin/rbh_cksum.sh '{output}' '{path}'");
The output of this script is a pair "version:sha1_sum".
version helps the script determine whether the file has changed since the last call. On POSIX, this is based on the last modification time and size of the file. On Lustre, it is based on a specific "data_version" value.
- If the file has changed, the old checksum is not relevant. The script computes the new checksum (sha1) and returns the new version and checksum.
- If the file version is unchanged, the script computes the checksum again and verifies it is unchanged. If the checksum has changed, the script reports a check failure.
Of course, you can write your own check command (or function in a robinhood plugin) to replace rbh_cksum.sh. It must follow this interface:
- Functions (defined in modules):
- As input, a function action should use the value of 'output' attribute to compare the result of the last execution.
- As output, a function action can store its result to 'output' attribute.
- External commands (command line):
- As input, a command can be invoked with the last output by using '{output}' placeholder.
- As output, the output attribute is set to the contents of the command's stdout (truncated to 255 characters).
If the command or action succeeds, the value of check status is set to ok. The command output is stored as output attribute, and last_check and last_success attributes are updated.
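Putting it together, a checker-based policy could look like this (a minimal sketch; the 'checksum' template shipped in /etc/robinhood.d/templates may differ):
define_policy checksum {
    status_manager = checker;
    scope { type == file }
    default_action = cmd("/usr/sbin/rbh_cksum.sh '{output}' '{path}'");
    default_lru_sort_attr = last_mod;
}
checksum_rules {
    # don't check files while they are being modified
    ignore { last_mod < 1h }
    # re-check everything else monthly
    rule default { condition { last_check > 30d } }
}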
The purpose of the modeguard status manager is to check the UNIX access rights of a set of entries matching the policy rules. It can force clearing a set of access rights (for example, clear the 'w' bit for 'other'), or force setting access rights (for example, the setgid bit for some directories).
An example of modeguard-based policy is defined in /etc/robinhood.d/includes/modeguard.inc. There is an example usage of this policy in /etc/robinhood.d/templates.
Modeguard configuration consists of 2 masks in octal format:
- set_mask: mask of access rights to be enforced.
- clear_mask: mask of access rights to be cleared.
Example:
modeguard_config {
    # set gid bit
    set_mask = 02000;
    # clear 'w' bit for other
    clear_mask = 0002;
}
Once a modeguard policy is defined, an entry's mode is checked each time robinhood discovers or updates the entry (when scanning, or reading changelogs). Modeguard status can take the following values:
- ok: entry’s permission bits are compliant with "set_mask" and/or "clear_mask".
- invalid: entry’s permission bits violate "set_mask" and/or "clear_mask".
Modeguard provides an enforce_mode action that enforces the set_mask and clear_mask of an entry.