This repository contains the code for our ICLR '24 workshop submission. We implement backdoor-poisoning attacks and evaluate our defenses against them on image classification tasks.

Attacks implemented:
Defenses implemented:
- Gaussian Blur (Invariance-Equivariance)
- Luminance (Exploiting evasiveness); both defenses are sketched below
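Both defenses act on the model inputs. The snippet below is a minimal, illustrative sketch using torchvision transforms; the kernel size, sigma, luma weights, and the `luminance_only` helper are assumptions made for illustration, not the exact settings or code used in this repository.

```python
import torch
import torchvision.transforms as T

# Gaussian-blur defense (invariance/equivariance): blurring suppresses small,
# high-frequency trigger patterns while preserving the semantic content the
# encoder should stay invariant to. Kernel size and sigma are illustrative.
gaussian_blur_defense = T.GaussianBlur(kernel_size=5, sigma=(0.5, 2.0))

def luminance_only(img: torch.Tensor) -> torch.Tensor:
    """Luminance defense (illustrative): keep only the luminance channel,
    discarding chrominance that a color-based trigger may rely on.
    Expects a (3, H, W) RGB tensor in [0, 1]."""
    r, g, b = img[0], img[1], img[2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 luma weights
    return y.unsqueeze(0).repeat(3, 1, 1)  # keep 3 channels for the backbone

# Example composition; the ordering and parameters are assumptions, not the
# exact pipeline used by this repository.
defense_transform = T.Compose([
    T.ToTensor(),
    gaussian_blur_defense,
    T.Lambda(luminance_only),
])
```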
Datasets implemented:
- CIFAR-10
- CIFAR-100
- ImageNet-100 (a 100-class subset of ImageNet; a loading sketch follows below)
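ImageNet-100 is a 100-class subset of ImageNet-1k. The snippet below shows one minimal way to build such a subset with torchvision; the `imagenet100_classes.txt` file name and directory layout are assumptions, not the repository's actual loading code.

```python
from pathlib import Path
from torch.utils.data import Subset
from torchvision.datasets import ImageFolder

def build_imagenet100(imagenet_root: str, class_list_file: str) -> Subset:
    """Keep only the synset ids (WNIDs) listed in class_list_file, one id
    per line. The file name and layout are illustrative assumptions."""
    keep = set(Path(class_list_file).read_text().split())
    full = ImageFolder(imagenet_root)
    keep_labels = {full.class_to_idx[c] for c in keep if c in full.class_to_idx}
    indices = [i for i, (_, label) in enumerate(full.samples) if label in keep_labels]
    # Labels keep their original 0..999 ids; remap them if a 100-way
    # classification head is needed.
    return Subset(full, indices)
```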
We demonstrate a successful and partly generalizable defense against backdoor attacks in self-supervised learning (SSL) and lay the theoretical groundwork for defending against backdoor attacks in semantic segmentation and object detection tasks.
- Download the repository from anonymous4openscience.
- Create a virtual environment (optional, but recommended): `virtualenv <env_name>`, then `source <env_name>/bin/activate`
- Install the required libraries: `pip install -r requirements.txt`
We provide a bash script to run our program with appropriate command-line arguments.
- Make the script executable: `chmod +x run.sh`
- Call `main_train.py` through the wrapper script: `bash run.sh <--args values>`

Run `bash run.sh --help` if you are unsure about the available arguments, options, or their meaning.
Results of all experiments are saved in a folder named `saves`. Each experiment creates a folder named `<job name>` (set by `run.sh`). Each experiment folder contains model state-dicts and optimizer states saved every 100 epochs, plus a `tfevents` file containing the TensorBoard log. To view training progress and compare training curves, run `tensorboard --logdir=saves`. When repeating the same experiment with different hyperparameters, use the `--suffix` option of `run.sh` to avoid overwriting the logs of the previous experiment.
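For reference, the folder layout described above corresponds roughly to the logging and checkpointing pattern sketched below; the file names and the `log_and_checkpoint` helper are illustrative assumptions, not the repository's exact code.

```python
import os
import torch
from torch.utils.tensorboard import SummaryWriter

def make_experiment_dir(job_name: str, save_root: str = "saves"):
    """Create saves/<job_name> and a TensorBoard writer that drops a
    tfevents file into it (names are illustrative)."""
    exp_dir = os.path.join(save_root, job_name)
    os.makedirs(exp_dir, exist_ok=True)
    return exp_dir, SummaryWriter(log_dir=exp_dir)

def log_and_checkpoint(exp_dir, writer, model, optimizer, epoch, loss):
    """Log scalars every epoch and save model/optimizer states every
    100 epochs, matching the layout described above."""
    writer.add_scalar("train/loss", loss, epoch)
    if (epoch + 1) % 100 == 0:
        torch.save(
            {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
            os.path.join(exp_dir, f"checkpoint_{epoch + 1:04d}.pt"),
        )
```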
- Implement ImageNet-100
- Remove unnecessary command line arguments and refactor code.
- JEPA
- MoCo v2
- ImageNet
- https://robustbench.github.io/