Commit

clean up readme
rpreen committed Jun 5, 2024
1 parent 59b71ab commit dc5085d
Showing 3 changed files with 60 additions and 44 deletions.
35 changes: 32 additions & 3 deletions CONTRIBUTING.md
@@ -1,4 +1,35 @@
# General guidance for contributors
# General Guidance for Contributors

## Development

Clone the repository and install the local package including all dependencies within a virtual environment:

```
$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install .[test]
```
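
If a virtual environment is not already active, one can be created with the standard `venv` module before running `pip install` above (a minimal sketch; any environment manager will do):

```
$ python -m venv .venv
$ source .venv/bin/activate
```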

Then to run the tests:

```
$ pytest .
```
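
While iterating, a subset of the tests can be selected by keyword (a small sketch; the keyword here is only an illustration):

```
$ pytest -k attack
```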

## Directory Structure

* `aisdc` Contains the aisdc source code.
- `attacks` Contains a variety of privacy attacks on machine learning models.
- `preprocessing` Contains preprocessing modules for test datasets.
- `safemodel` The safemodel wrappers for common machine learning models.
* `docs` Contains Sphinx documentation files.
* `examples` Contains examples of how to run the code contained in this repository.
* `tests` Contains unit tests.
* `user_stories` Contains user guides.

## Documentation

Documentation is hosted here: https://ai-sdc.github.io/AI-SDC/

## Style Guide

@@ -26,8 +57,6 @@ To install as a hook that executes with every `git commit`:
$ pre-commit install
```
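
The installed hooks can also be run on demand over the whole repository, which is a quick way to check formatting before pushing:

```
$ pre-commit run --all-files
```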

*******************************************************************************

## Automatic Documentation

The documentation is automatically built using [Sphinx](https://www.sphinx-doc.org) and GitHub Actions.
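
To preview the documentation locally, a rough sketch (assuming the Sphinx sources live under `docs/source` and any required Sphinx extensions are installed in the environment):

```
$ pip install sphinx
$ sphinx-build -b html docs/source docs/build/html
```
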
50 changes: 9 additions & 41 deletions README.md
@@ -6,38 +6,22 @@

# AI-SDC

A collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).
A collection of tools and resources for managing the [statistical disclosure control](https://en.wikipedia.org/wiki/Statistical_disclosure_control) of trained [machine learning](https://en.wikipedia.org/wiki/Machine_learning) models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).

### User Guides
The `aisdc` package provides:
* A variety of privacy attacks for assessing machine learning models.
* The safemodel package: an open source wrapper for common machine learning models. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models are more compliant with disclosure control.

A collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on how to use each of these scripts and which scripts to use are included in the README of the [`user_stories`](./user_stories) folder.

## Content
## User Guides

* `aisdc`
- `attacks` Contains a variety of privacy attacks on machine learning models, including membership and attribute inference.
- `preprocessing` Contains preprocessing modules for test datasets.
- `safemodel` The safemodel package is an open source wrapper for common machine learning models. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models are more compliant with disclosure control.
* `docs` Contains Sphinx documentation files.
* `example_notebooks` Contains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.
* `examples` Contains examples of how to run the code contained in this repository:
- How to simulate attribute inference attacks `attribute_inference_example.py`.
- How to simulate membership inference attacks:
+ Worst case scenario attack `worst_case_attack_example.py`.
+ LIRA scenario attack `lira_attack_example.py`.
- Integration of attacks into safemodel classes `safemodel_attack_integration_bothcalls.py`.
* `risk_examples` Contains hypothetical examples of data leakage through machine learning models as described in the [Green Paper](https://doi.org/10.5281/zenodo.6896214).
* `tests` Contains unit tests.

## Documentation
A collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on how to use each of these scripts and which scripts to use are included in the README of the [`user_stories`](./user_stories) folder.

Documentation is hosted here: https://ai-sdc.github.io/AI-SDC/

## Installation / End-user
## Installation

[![PyPI package](https://img.shields.io/pypi/v/aisdc.svg)](https://pypi.org/project/aisdc)

Install `aisdc` (safest in a virtual env) and manually copy the [`examples`](examples/) and [`example_notebooks`](example_notebooks/).
Install `aisdc` and manually copy the [`examples`](examples/).

To install only the base package, which includes the attacks used for assessing privacy:
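
A minimal sketch, assuming the base package is the `aisdc` distribution published on PyPI (per the badge above):

```
$ pip install aisdc
```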

@@ -61,24 +45,8 @@ For example, to run the `lira_attack_example.py`:
$ python -m lira_attack_example
```

## Development

Clone the repository and install the local package including all dependencies (safest in a virtual env):

```
$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install .[test]
```

Then run the tests:

```
$ pytest .
```

---

This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme (https://dareuk.org.uk/), delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO - MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER - MC_PC_21033). This project has also been supported by MRC and EPSRC [grant number MR/S010351/1]: PICTURES.
This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the [DARE UK](https://dareuk.org.uk) (Data and Analytics Research Environments UK) programme, delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO; MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER; MC_PC_21033). This project has also been supported by MRC and EPSRC [grant number MR/S010351/1]: PICTURES.

<img src="docs/source/images/UK_Research_and_Innovation_logo.svg" width="20%" height="20%" padding=20/> <img src="docs/source/images/health-data-research-uk-hdr-uk-logo-vector.png" width="10%" height="10%" padding=20/> <img src="docs/source/images/logo_print.png" width="15%" height="15%" padding=20/>
19 changes: 19 additions & 0 deletions examples/README.md
@@ -0,0 +1,19 @@
# Examples

This folder contains examples of how to run the code contained in this repository.

## Scripts

* How to simulate attribute inference attacks: `attribute_inference_example.py`.
* How to simulate membership inference attacks:
- Worst case scenario attack: `worst_case_attack_example.py`.
- LIRA scenario attack: `lira_attack_example.py`.
* Integration of attacks into safemodel classes: `safemodel_attack_integration_bothcalls.py`.
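
For example, one of these scripts can be run from this folder as a module (a minimal sketch, mirroring the invocation shown in the top-level README):

```
$ python -m attribute_inference_example
```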

## Notebooks

The `notebooks` folder contains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.
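
To open them, a minimal sketch (assuming Jupyter is installed in the environment):

```
$ pip install jupyter
$ jupyter notebook notebooks/
```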

## Risk Examples

The `risk_examples` folder contains hypothetical examples of data leakage through machine learning models as described in the [Green Paper](https://doi.org/10.5281/zenodo.6896214).
