From dc5085de9d62f8da12b550f1a606919f68b3cea1 Mon Sep 17 00:00:00 2001
From: Richard Preen
Date: Wed, 5 Jun 2024 15:31:46 +0100
Subject: [PATCH] clean up readme

---
 CONTRIBUTING.md    | 35 +++++++++++++++++++++++++++++---
 README.md          | 50 +++++++++-------------------------------------
 examples/README.md | 19 ++++++++++++++++++
 3 files changed, 60 insertions(+), 44 deletions(-)
 create mode 100644 examples/README.md

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 086bad5f..d260fa84 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,4 +1,35 @@
-# General guidance for contributors
+# General Guidance for Contributors
+
+## Development
+
+Clone the repository and install the local package, including all dependencies, within a virtual environment:
+
+```
+$ git clone https://github.com/AI-SDC/AI-SDC.git
+$ cd AI-SDC
+$ pip install .[test]
+```
+
+Then run the tests:
+
+```
+$ pytest .
+```
+
+## Directory Structure
+
+* `aisdc` Contains the aisdc source code.
+  - `attacks` Contains a variety of privacy attacks on machine learning models.
+  - `preprocessing` Contains preprocessing modules for test datasets.
+  - `safemodel` The safemodel wrappers for common machine learning models.
+* `docs` Contains Sphinx documentation files.
+* `examples` Contains examples of how to run the code contained in this repository.
+* `tests` Contains unit tests.
+* `user_stories` Contains user guides.
+
+## Documentation
+
+Documentation is hosted here: https://ai-sdc.github.io/AI-SDC/

 ## Style Guide

@@ -26,8 +57,6 @@ To install as a hook that executes with every `git commit`:
 $ pre-commit install
 ```

-*******************************************************************************
-
 ## Automatic Documentation

 The documentation is automatically built using [Sphinx](https://www.sphinx-doc.org) and github actions.
diff --git a/README.md b/README.md
index f54a73ce..c28f448a 100644
--- a/README.md
+++ b/README.md
@@ -6,38 +6,22 @@
 # AI-SDC

-A collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).
+A collection of tools and resources for managing the [statistical disclosure control](https://en.wikipedia.org/wiki/Statistical_disclosure_control) of trained [machine learning](https://en.wikipedia.org/wiki/Machine_learning) models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).

-### User Guides
+The `aisdc` package provides:
+* A variety of privacy attacks for assessing machine learning models.
+* The safemodel package: an open source wrapper for common machine learning models. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models are more compliant with disclosure control.

-A collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on how to use each of these scripts and which scripts to use are included in the README of the [`user_stories`](./user_stories) folder.
-
-## Content
+## User Guides

-* `aisdc`
-    - `attacks` Contains a variety of privacy attacks on machine learning models, including membership and attribute inference.
-    - `preprocessing` Contains preprocessing modules for test datasets.
-    - `safemodel` The safemodel package is an open source wrapper for common machine learning models. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models are more compliant with disclosure control.
-* `docs` Contains Sphinx documentation files.
-* `example_notebooks` Contains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.
-* `examples` Contains examples of how to run the code contained in this repository:
-    - How to simulate attribute inference attacks `attribute_inference_example.py`.
-    - How to simulate membership inference attacks:
-        + Worst case scenario attack `worst_case_attack_example.py`.
-        + LIRA scenario attack `lira_attack_example.py`.
-    - Integration of attacks into safemodel classes `safemodel_attack_integration_bothcalls.py`.
-* `risk_examples` Contains hypothetical examples of data leakage through machine learning models as described in the [Green Paper](https://doi.org/10.5281/zenodo.6896214).
-* `tests` Contains unit tests.
-
-## Documentation
+A collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on how to use each of these scripts and which scripts to use are included in the README of the [`user_stories`](./user_stories) folder.

-Documentation is hosted here: https://ai-sdc.github.io/AI-SDC/

-## Installation / End-user
+## Installation

 [![PyPI package](https://img.shields.io/pypi/v/aisdc.svg)](https://pypi.org/project/aisdc)

-Install `aisdc` (safest in a virtual env) and manually copy the [`examples`](examples/) and [`example_notebooks`](example_notebooks/).
+Install `aisdc` and manually copy the [`examples`](examples/) folder.

 To install only the base package, which includes the attacks used for assessing privacy:

@@ -61,24 +45,8 @@ For example, to run the `lira_attack_example.py`:
 $ python -m lira_attack_example
 ```

-## Development
-
-Clone the repository and install the local package including all dependencies (safest in a virtual env):
-
-```
-$ git clone https://github.com/AI-SDC/AI-SDC.git
-$ cd AI-SDC
-$ pip install .[test]
-```
-
-Then run the tests:
-
-```
-$ pytest .
-```
-
 ---

-This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme (https://dareuk.org.uk/), delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO -MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER - MC_PC_21033). This project has also been supported by MRC and EPSRC [grant number MR/S010351/1]: PICTURES.
+This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the [DARE UK](https://dareuk.org.uk) (Data and Analytics Research Environments UK) programme, delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO; MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER; MC_PC_21033). This project has also been supported by MRC and EPSRC [grant number MR/S010351/1]: PICTURES.
diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 00000000..ff31b26e
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,19 @@
+# Examples
+
+This folder contains examples of how to run the code contained in this repository.
+
+## Scripts
+
+* How to simulate attribute inference attacks: `attribute_inference_example.py`.
+* How to simulate membership inference attacks:
+  - Worst case scenario attack: `worst_case_attack_example.py`.
+  - LIRA scenario attack: `lira_attack_example.py`.
+* Integration of attacks into safemodel classes: `safemodel_attack_integration_bothcalls.py`.
+
+## Notebooks
+
+The `notebooks` folder contains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.
+
+## Risk Examples
+
+The `risk_examples` folder contains hypothetical examples of data leakage through machine learning models as described in the [Green Paper](https://doi.org/10.5281/zenodo.6896214).
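+
+## Attack Workflow Sketch
+
+The scripts above are the reference implementations. As a rough illustration of the workflow they automate, the sketch below wraps a trained scikit-learn classifier in a target object and runs a worst-case membership inference attack against it. It is illustrative only: the `Target` and `WorstCaseAttack` names, import paths, and methods used here are assumptions, so treat `worst_case_attack_example.py` as the authoritative reference for the current API.
+
+```python
+# Illustrative sketch only -- see worst_case_attack_example.py for authoritative usage.
+from sklearn.datasets import load_breast_cancer
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split
+
+from aisdc.attacks.target import Target  # assumed import path
+from aisdc.attacks.worst_case_attack import WorstCaseAttack  # assumed import path
+
+# Train a deliberately overfitted model so that membership signals are easy to see.
+X, y = load_breast_cancer(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
+model = RandomForestClassifier(min_samples_leaf=1, random_state=0)
+model.fit(X_train, y_train)
+
+# Package the model and its train/test split as the attack target.
+target = Target(model=model)
+target.add_processed_data(X_train, y_train, X_test, y_test)
+
+# Run the worst-case membership inference attack; report generation is shown in the example script.
+attack = WorstCaseAttack()
+attack.attack(target)
+```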
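+
+## Safemodel Sketch
+
+The notebooks above cover the "safe_XX" model wrappers in detail. As a rough sketch of the idea, a safemodel class is used like its scikit-learn counterpart but can also check its own hyperparameters against a TRE's disclosure rules before release. The class and method names below (`SafeDecisionTreeClassifier`, `preliminary_check`) are assumptions; see the notebooks for the exact, up-to-date API.
+
+```python
+# Illustrative sketch only -- see the notebooks for authoritative usage.
+from sklearn.datasets import load_breast_cancer
+
+from aisdc.safemodel.classifiers import SafeDecisionTreeClassifier  # assumed import path
+
+X, y = load_breast_cancer(return_X_y=True)
+
+# Use the wrapper exactly like the underlying scikit-learn estimator.
+model = SafeDecisionTreeClassifier(min_samples_leaf=10)
+model.fit(X, y)
+
+# Ask the wrapper whether the chosen hyperparameters look disclosive before requesting release.
+msg, disclosive = model.preliminary_check()
+print(msg, disclosive)
+```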