This folder contains examples showing how to run the code in this repository.
- Examples training a target model (a rough sketch of what these scripts do follows this list):
  - `train_rf_breast_cancer.py` - Trains an RF on the breast cancer dataset.
  - `train_rf_nursery.py` - Trains an RF on the nursery dataset with one-hot encoding.
- Examples of programmatically running attacks:
  - `attack_lira.py` - Simulated LiRA membership inference attack on the breast cancer RF.
  - `attack_worstcase.py` - Simulated worst-case membership inference attack on the breast cancer RF.
  - `attack_attribute.py` - Simulated attribute inference attack on the nursery RF.
- Examples of attack integration within the safemodel classes:
  - `safemodel.py` - Simulated attacks on a safe RF trained on the nursery dataset.
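The training scripts broadly follow the pattern sketched below: fit a scikit-learn model on a dataset, then save the model and data splits so the attack examples can load them. This is an illustrative sketch only, not the repository's actual code; the saving step is summarised in a comment because its exact API depends on the package version.

```python
# Illustrative sketch of roughly what a training script such as
# train_rf_breast_cancer.py does -- not the actual repository code.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load and split the breast cancer dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train the target random forest.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Train accuracy:", model.score(X_train, y_train))
print("Test accuracy:", model.score(X_test, y_test))

# The real example scripts additionally save the fitted model and the
# train/test splits (via the package's Target wrapper) so that the
# attack examples can load them afterwards.
```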
To run a programmatic example:
- Run the relevant training script.
- Run the desired attack script.
For example:
```
$ python -m examples.train_rf_breast_cancer
$ python -m examples.attack_lira
```
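As a sketch of how the programmatic route fits together: the attack script loads the target saved by the training step and hands it to an attack object. The import paths, class names, arguments, and directory name below are assumptions made for illustration, not the repository's confirmed API; `attack_lira.py` shows the actual usage.

```python
# Hypothetical sketch only -- import paths, class names, arguments and the
# target directory name are assumptions; see attack_lira.py for real usage.
from sacroml.attacks.likelihood_attack import LIRAAttack  # assumed module path
from sacroml.attacks.target import Target                 # assumed module path

# Load the target saved by the training script (assumed directory name).
target = Target()
target.load("target_rf_breast_cancer")

# Configure and run the LiRA membership inference attack (assumed parameters).
attack = LIRAAttack(n_shadow_models=100, output_dir="outputs_lira")
attack.attack(target)
```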
To run an example using the CLI tool:
- Run the relevant training script.
- Generate an `attack.yaml` config.
- Run the attack CLI tool.
For example:
```
$ python -m examples.train_rf_nursery
$ sacroml gen-attack
$ sacroml run target_rf_nursery attack.yaml
```
If you are unable to use the Python `Target` class to generate the `target_dir/` containing the `target.yaml`, you can generate one using the CLI tool:
```
$ sacroml gen-target
```
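For comparison, the Python route to the same target directory wraps the fitted model and data splits in the `Target` class and saves them. The constructor keywords and `save()` signature below are assumptions for illustration; the training scripts show the actual usage.

```python
# Hypothetical sketch of generating target_dir/ from Python instead of
# running `sacroml gen-target`; the Target keywords and save() signature
# are assumptions -- see the training scripts for the real usage.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from sacroml.attacks.target import Target  # assumed module path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

target = Target(
    model=model,                   # fitted scikit-learn estimator
    dataset_name="breast cancer",  # assumed keyword
    X_train=X_train,
    y_train=y_train,
    X_test=X_test,
    y_test=y_test,
)
target.save("target_dir")  # assumed to write target.yaml plus model/data files
```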
A collection of user guides can be found in the `user_stories` folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on which scripts to use and how to run them are included in the README in that folder.
The `notebooks` folder contains short tutorials on the basic concept of "safe" versions of machine learning algorithms, and examples of some specific algorithms.
The `risk_examples` folder contains hypothetical examples of data leakage through ML models, as described by Jefferson et al. (2022).