Adversarial Attacks and Defences

Adversarial Attacks and Defences (AAD) is a Python framework for defending machine learning models from adversarial examples.
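
To make the threat concrete, the sketch below crafts an adversarial example with the Fast Gradient Sign Method (FGSM), one of the classic attacks in this space. It is a generic plain-PyTorch illustration, not a call into the AAD API; model, loss_fn, x, and y are assumed to be an arbitrary differentiable classifier, its loss, an input batch scaled to [0, 1], and the true labels.

    # Generic FGSM sketch in plain PyTorch -- an illustration of the
    # attack family AAD deals with, not the aad package itself.
    import torch

    def fgsm_example(model, loss_fn, x, y, epsilon=0.1):
        # Leaf copy of the input so gradients flow to the pixels.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon.
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
        return x_adv.detach()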

Required Libraries

Build And Install The Package

Install the module as a package using setuptools:

  1. Use a virtual environment

    virtualenv --system-site-packages -p python3 ./venv
    source ./venv/bin/activate
  2. Install the project with pip (a quick import check follows these steps)

    pip install --upgrade pip
    pip install -e .
    pip check
    pip freeze
  3. Run the code demos in Jupyter Lab

    cd ./examples
    jupyter lab
  4. Run a script from the terminal

    python ./cmd/train.py -d MNIST -e 5 -vw
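
If the editable install succeeded, the top-level package (aad, shown in the code structure below) should import cleanly:

    python -c "import aad"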

Code Structure

root
├─┬ aad
│ ├── attacks    # modules of adversarial attacks
│ ├── basemodels # modules of base classification models
│ ├── datasets   # data loader helper module
│ └── defences   # modules of adversarial defences
├── cmd          # scripts for terminal interface
├── data         # dataset
├── examples     # code examples
├── log          # logging files
├── save         # saved pre-trained models
└── tests        # unit tests
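
One classic counter to attacks like FGSM is adversarial training: crafting adversarial examples against the current weights and training on them alongside the clean batch. The sketch below is a generic plain-PyTorch version of that idea (again not the AAD API; it reuses the fgsm_example helper sketched earlier):

    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimiser, epsilon=0.1):
        model.train()
        for x, y in loader:
            # Craft adversarial inputs against the current weights.
            x_adv = fgsm_example(model, F.cross_entropy, x, y, epsilon)
            optimiser.zero_grad()  # discard gradients left over from crafting
            # A common choice: weight clean and adversarial losses 50/50.
            loss = 0.5 * F.cross_entropy(model(x), y) \
                 + 0.5 * F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimiser.step()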

Run Script From Terminal

The terminal scripts are separated into three parts: train, attack, and defence; a sketch of the evaluation they perform follows the list.

  • To train a model:

    python ./cmd/train.py --help
  • To attack a model:

    python ./cmd/attack.py --help
  • To defend a model:

    python ./cmd/defend_ad.py --help
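
Whatever the defence, the attack and defence stages reduce to the same measurement: accuracy on clean inputs versus accuracy on adversarial ones. A plain-PyTorch sketch of that evaluation (not the AAD scripts' internals; fgsm_example is the helper sketched earlier):

    import torch
    import torch.nn.functional as F

    def evaluate_robustness(model, loader, epsilon=0.1):
        model.eval()
        clean, adv, total = 0, 0, 0
        for x, y in loader:
            # Crafting the attack needs gradients, so no torch.no_grad() here.
            x_adv = fgsm_example(model, F.cross_entropy, x, y, epsilon)
            with torch.no_grad():
                clean += (model(x).argmax(dim=1) == y).sum().item()
                adv += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.size(0)
        return clean / total, adv / total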

Examples

Examples are available under the ./examples/ folder.
