Examples

This folder contains examples showing how to run the code in this repository.

Scripts

Contents

  • Examples of training a target model (a minimal training sketch follows this list):
    • train_rf_breast_cancer.py - Trains a random forest (RF) on the breast cancer dataset.
    • train_rf_nursery.py - Trains an RF on the nursery dataset with one-hot encoding.
  • Examples of programmatically running attacks:
    • attack_lira.py - Simulated LiRA membership inference attack on the breast cancer RF.
    • attack_worstcase.py - Simulated worst-case membership inference attack on the breast cancer RF.
    • attack_attribute.py - Simulated attribute inference attack on the nursery RF.
  • Examples of attack integration within safemodel classes:
    • safemodel.py - Simulated attacks on a safe RF trained on the nursery dataset.
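
For orientation, the training scripts follow the standard scikit-learn pattern. The sketch below is illustrative only: the split size and hyperparameters are assumptions, and the real scripts additionally package the trained model and data for the attack examples, which is omitted here.

# Minimal sketch of the pattern used by train_rf_breast_cancer.py (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the breast cancer dataset and hold out a test set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Fit a random forest target model and report accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))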

Programmatic Execution

To run a programmatic example:

  1. Run the relevant training script.
  2. Run the desired attack script.

For example:

$ python -m examples.train_rf_breast_cancer
$ python -m examples.attack_lira
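
If you prefer to drive both steps from a single Python script instead of the shell, the standard-library runpy module can run the example modules in order. A minimal sketch, assuming it is executed from the repository root so the examples package is importable:

# Run the training and attack examples back to back.
import runpy

# Equivalent to `python -m examples.train_rf_breast_cancer`.
runpy.run_module("examples.train_rf_breast_cancer", run_name="__main__")

# Equivalent to `python -m examples.attack_lira`.
runpy.run_module("examples.attack_lira", run_name="__main__")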

Command Line Interface (CLI) Execution

To run an example via the CLI:

  1. Run the relevant training script.
  2. Generate an attack.yaml config.
  3. Run the attack CLI tool.

For example:

$ python -m examples.train_rf_nursery
$ sacroml gen-attack
$ sacroml run target_rf_nursery attack.yaml
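
Before running the attack, it can be useful to review the generated configuration. A minimal sketch, assuming attack.yaml is written to the current directory and PyYAML is installed (the keys inside the file are determined by sacroml and are not reproduced here):

# Load and print the attack configuration produced by `sacroml gen-attack`.
from pathlib import Path

import yaml

config = yaml.safe_load(Path("attack.yaml").read_text())
print(yaml.safe_dump(config, sort_keys=False))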

If you are unable to use the Python Target class to generate the target_dir/ containing the target.yaml, you can generate one with the CLI tool:

$ sacroml gen-target
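
Whichever route you take, sacroml run expects the target directory to contain a target.yaml. A quick check before launching an attack (the directory name is taken from the example above; adjust it for your own target):

# Verify the target directory is ready for `sacroml run`.
from pathlib import Path

target_dir = Path("target_rf_nursery")
if (target_dir / "target.yaml").exists():
    print(f"{target_dir} looks ready: target.yaml found")
else:
    print(f"{target_dir} is missing target.yaml; generate it with the Target class or `sacroml gen-target`")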

User Stories

A collection of user guides can be found in the user_stories folder of this repository. These guides include configurable examples from the perspectives of both a researcher and a TRE (Trusted Research Environment), with separate scripts for each. Instructions on which scripts to use and how to run them are included in the README in that folder.

Notebooks

The notebooks folder contains short tutorials on the basic concept of "safe" versions of machine learning algorithms, and examples of some specific algorithms.

Risk Examples

The risk_examples folder contains hypothetical examples of data leakage through ML models, as described by Jefferson et al. (2022).