
I-BAU: Adversarial Unlearning of Backdoors via Implicit Hypergradient


Official implementation of the ICLR 2022 paper, "Adversarial Unlearning of Backdoors via Implicit Hypergradient" [openreview][video].

We propose a novel minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data.
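In the paper's notation (sketched here; $\theta$ denotes the model parameters, $\delta$ a universal trigger perturbation with norm bound $C_\delta$, $(x_i, y_i)$ the $n$ clean samples, and $L$ the classification loss), the objective is

```math
\min_{\theta} \; \max_{\|\delta\| \le C_\delta} \; H(\delta, \theta),
\qquad
H(\delta, \theta) = \frac{1}{n} \sum_{i=1}^{n} L\big(f_{\theta}(x_i + \delta),\, y_i\big)
```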

To solve the minimax problem, we propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm, which utilizes the implicit hypergradient to account for the interdependence between the inner and outer optimization. I-BAU requires less computation to take effect; in particular, it is more than 13× faster than the most efficient baseline in the single-target attack setting. It remains effective even in the extreme case where the defender can access only 100 clean samples, a setting where all the baselines fail to produce acceptable results.
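The actual defense lives in defense.py and the notebooks below. As a rough mental model only, the following minimal PyTorch sketch alternates the two levels of the minimax problem with plain gradient steps; the function and hyperparameter names are illustrative, and the outer update here deliberately omits the implicit hypergradient that defines I-BAU.

```python
import torch
import torch.nn.functional as F

def unlearn(model, clean_loader, n_rounds=3, K=5,
            inner_lr=0.1, outer_lr=0.001, bound=0.1, device='cuda'):
    """Sketch of the minimax unlearning loop (not the repo's I-BAU code).

    Inner problem:  K ascent steps on a universal perturbation `delta`
                    that maximizes the loss on clean data.
    Outer problem:  one descent step on the model weights evaluated at
                    x + delta. I-BAU instead computes an implicit
                    hypergradient here to capture how the inner solution
                    depends on the weights.
    """
    opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    model.train()
    for _ in range(n_rounds):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            # Inner maximization: one trigger shared across the batch.
            delta = torch.zeros_like(x[:1], requires_grad=True)
            for _ in range(K):
                loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
                grad, = torch.autograd.grad(loss, delta)
                with torch.no_grad():
                    delta += inner_lr * grad.sign()  # ascent step
                    delta.clamp_(-bound, bound)      # project to l_inf ball
            # Outer minimization: unlearn the response to the trigger.
            opt.zero_grad()
            F.cross_entropy(model((x + delta.detach()).clamp(0, 1)), y).backward()
            opt.step()
    return model
```

The --n_rounds and --K flags of defense.py plausibly play these outer-round and inner-step roles; check python defense.py --help for their exact meaning.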

Requirements

This code has been tested with Python 3.6, PyTorch 1.8.1, and CUDA 10.1.

Usage & HOW-TO

  • Install required packages.
  • Prepare the poisoned models in the directory ./checkpoint/.
  • We provide two examples on poisoned models trained on the GTSRB and CIFAR10 datasets; see clean_solution_batch_op..._cifar.ipynb and clean_solution_batch_op..._gtsrb.ipynb for more details.
  • For a more flexible usage, run python defense.py. An example is as follows:

```bash
python defense.py --dataset cifar10 --poi_path './checkpoint/badnets_8_02_ckpt.pth' --optim Adam --lr 0.001 --n_rounds 3 --K 5
```

The clean data used for backdoor unlearning can be specified with the argument --unl_set; if it is not specified, a subset of the test set will be used for unlearning.
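For example (the path below is hypothetical, shown only to illustrate passing a custom clean set):

```bash
python defense.py --dataset cifar10 --poi_path './checkpoint/badnets_8_02_ckpt.pth' --unl_set './data/my_clean_set'
```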

  • For more information regarding training options, please check the help message:
    python defense.py --help.

Citation

If you find our work useful, please cite:

@inproceedings{zeng2021adversarial,
  title={Adversarial Unlearning of Backdoors via Implicit Hypergradient},
  author={Zeng, Yi and Chen, Si and Park, Won and Mao, Zhuoqing and Jin, Ming and Jia, Ruoxi},
  booktitle={International Conference on Learning Representations},
  year={2022}
}

Special thanks to...

