Adversarial Attacks and Defences (AAD) is a Python framework for defending machine learning models from adversarial examples.
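For background, an adversarial example is an input carrying a small, deliberately crafted perturbation that flips a model's prediction. The sketch below is not part of AAD; it is a minimal, self-contained illustration of the idea using an FGSM-style attack on a toy logistic-regression model, with all names invented for the example:

```python
import numpy as np

# Toy logistic-regression "model" with fixed weights (illustration only).
w = np.array([3.0, -2.0])
b = 0.0

def predict(x):
    """Sigmoid score for class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: step the input along the sign of the
    loss gradient. For binary cross-entropy, dL/dx = (p - y) * w."""
    grad_x = (predict(x) - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5])           # clean input, true label y = 1
x_adv = fgsm(x, y=1.0, eps=0.5)    # bounded L-inf perturbation

print(predict(x))      # ~0.88: classified correctly
print(predict(x_adv))  # ~0.38: pushed across the decision boundary
```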
Install the module as a package using setuptools:

- Use a virtual environment:

  ```bash
  virtualenv --system-site-packages -p python3 ./venv
  source ./venv/bin/activate
  ```

- Install your project with pip:

  ```bash
  pip install --upgrade pip
  pip install -e .
  pip check
  pip freeze
  ```

- Run the code demo from Jupyter Lab:

  ```bash
  cd ./examples
  jupyter lab
  ```

- Run the script from the terminal:

  ```bash
  python ./cmd/train.py -d MNIST -e 5 -vw
  ```
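Judging from the directory layout below, a training run writes its logs under ./log and the trained model under ./save.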
The repository is structured as follows:

```
root
├─┬ aad
│ ├── attacks     # modules of adversarial attacks
│ ├── basemodels  # modules of base classification models
│ ├── datasets    # data loader helper module
│ └── defences    # modules of adversarial defences
├── cmd           # scripts for the terminal interface
├── data          # dataset
├── examples      # code examples
├── log           # logging files
├── save          # saved pre-trained models
└── tests         # unit tests
```
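Because `pip install -e .` installs `aad` as a package, the folders under `aad/` map directly to importable modules. A minimal sketch; only module-level imports are shown, since the individual classes and functions inside each module are not listed here:

```python
# Assumes the package was installed in editable mode (pip install -e .).
# Only module-level imports are shown; see each module for its
# actual classes and functions.
import aad.attacks      # adversarial attack implementations
import aad.basemodels   # base classification models
import aad.datasets     # data loader helpers
import aad.defences     # adversarial defence implementations

print(aad.datasets.__file__)  # confirms the editable install resolves
```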
The terminal scripts are separated into three parts: train, attack, and defence (a conceptual sketch of what a defence step does follows the list below).

- To train a model:

  ```bash
  python ./cmd/train.py --help
  ```

- To attack a model:

  ```bash
  python ./cmd/attack.py --help
  ```

- To defend a model:

  ```bash
  python ./cmd/defend_ad.py --help
  ```
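A common defence strategy is adversarial training: the training set is augmented with adversarial examples so the model learns to resist them. The sketch below is a self-contained NumPy illustration of that idea, not AAD's implementation; every name in it is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x, w, b):
    """Sigmoid score for class 1 of a logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_batch(x, y, w, b, eps=0.2):
    """Craft adversarial examples for a whole batch via FGSM."""
    grad = (predict(x, w, b) - y)[:, None] * w  # dL/dx per sample
    return x + eps * np.sign(grad)

# Tiny synthetic dataset: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
Y = np.array([0.0] * 50 + [1.0] * 50)

w, b, lr = rng.normal(size=2), 0.0, 0.1
for _ in range(200):
    X_adv = fgsm_batch(X, Y, w, b)   # attack the current model
    X_all = np.vstack([X, X_adv])    # train on clean + adversarial data
    Y_all = np.concatenate([Y, Y])
    p = predict(X_all, w, b)
    # Gradient-descent step on mean binary cross-entropy.
    w -= lr * ((p - Y_all) @ X_all) / len(Y_all)
    b -= lr * (p - Y_all).mean()

# The hardened model should still classify adversarial inputs correctly.
acc = ((predict(fgsm_batch(X, Y, w, b), w, b) > 0.5) == Y).mean()
print(f"accuracy on adversarial inputs: {acc:.2f}")
```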
Examples are available under the ./examples/ folder.