
Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training

This is the repo for the code and datasets used in the paper Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training, accepted at NeurIPS 2023. The camera-ready paper is available here.

Algorithm overview

The overall procedure can be summarized in four main steps: i) isolated subspace training; ii) subspace searching; iii) aggregation; iv) model cleaning with consensus fusion. The following figure illustrates the overall process.
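To make the four steps concrete, below is a minimal, self-contained toy sketch in PyTorch. It illustrates the step names above and is not the repository's implementation; the mask construction, the fake update rule, and the consensus threshold theta are all assumptions for exposition.

    import torch

    # Toy illustration of one Lockdown-style round (names are assumptions).
    num_clients, dim, theta = 5, 10, 3
    global_model = torch.zeros(dim)

    # i) Isolated subspace training: each client trains only the parameters
    # selected by its binary mask (here, a fake masked update).
    masks = [(torch.rand(dim) > 0.5).float() for _ in range(num_clients)]
    updates = [torch.randn(dim) * m for m in masks]

    # ii) Subspace searching: clients adjust their masks between rounds
    # (represented here by re-sampling a small fraction of mask entries).
    masks = [torch.where(torch.rand(dim) < 0.1,
                         (torch.rand(dim) > 0.5).float(), m) for m in masks]

    # iii) Aggregation: average the masked client updates, FedAvg-style.
    global_model = global_model + torch.stack(updates).mean(dim=0)

    # iv) Model cleaning with consensus fusion: keep only parameters that
    # at least `theta` clients included in their subspaces.
    consensus = (torch.stack(masks).sum(dim=0) >= theta).float()
    cleaned_model = global_model * consensus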

Get started

Package requirements

  • PyTorch
  • NumPy
  • TorchVision
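Assuming a standard Python environment, these can be installed with pip:

    pip install torch torchvision numpy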

Data preparation

The FashionMNIST and CIFAR-10/100 datasets will be downloaded automatically via TorchVision.
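For reference, TorchVision fetches each dataset on first use. For example (the root path here is arbitrary):

    from torchvision import datasets

    # Downloaded to ./data on the first call; reused on later calls.
    train_set = datasets.CIFAR10(root="./data", train=True, download=True)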

Command to run

The following command runs Lockdown in its default setting:

    python federated.py --method lockdown

You can also find ready-made scripts in the src/script directory.

File organization

  • The main simulation program is federated.py: it initializes the benign and poisoned datasets, calls clients to perform local training, invokes the aggregator, and applies consensus fusion before testing.

  • Lockdown's client-side local training logic is in agent_sparse.py.

  • The vanilla FedAvg client-side local training logic is in agent.py.

  • The aggregation logic is in aggregation.py, where multiple defense baselines are implemented.

  • The data poisoning, data preparation, and data distribution logic is in utils.py (a generic poisoning sketch follows this list).
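For intuition, the snippet below shows a generic pixel-trigger poisoning scheme of the kind such a utility might implement. The patch size, trigger value, target class, and function name are illustrative assumptions, not taken from utils.py.

    import torch

    def poison_batch(images, labels, target_class=0, patch=3):
        """Generic backdoor poisoning sketch: stamp a small white square
        in the bottom-right corner and relabel to the target class."""
        images = images.clone()
        images[:, :, -patch:, -patch:] = 1.0   # trigger pattern
        labels = torch.full_like(labels, target_class)
        return images, labels

    # Example: poison a batch of 8 CIFAR-sized images.
    imgs, lbls = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    poisoned_imgs, poisoned_lbls = poison_batch(imgs, lbls)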

Logging and checkpoints

Logging files are written to src/logs. Benign accuracy, ASR (attack success rate), and backdoor accuracy are evaluated every round. For Lockdown, the three metrics correspond to the following logging format:

| Clean Val_Loss/Val_Acc: (benign loss) / (benign accuracy) |
| Clean Attack Success Ratio: (ASR loss) / (ASR) |
| Clean Poison Loss/Clean Poison accuracy: (backdoor loss) / (backdoor accuracy) |
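To extract these metrics programmatically, a small parser along the following lines should work; the regular expressions assume the exact format shown above, and the log file name is a placeholder.

    import re

    # Patterns assume the line format documented above.
    val_re = re.compile(r"Clean Val_Loss/Val_Acc:\s*([\d.]+)\s*/\s*([\d.]+)")
    asr_re = re.compile(r"Clean Attack Success Ratio:\s*([\d.]+)\s*/\s*([\d.]+)")

    with open("src/logs/run.log") as f:  # placeholder file name
        for line in f:
            if m := val_re.search(line):
                loss, acc = map(float, m.groups())
                print(f"benign: loss={loss:.4f} acc={acc:.4f}")
            elif m := asr_re.search(line):
                asr_loss, asr = map(float, m.groups())
                print(f"attack: loss={asr_loss:.4f} ASR={asr:.4f}")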

Model checkpoints will be saved every 25 rounds in the directory src/checkpoint.
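A saved checkpoint can be reloaded with torch.load; the file name below is a placeholder, since the exact naming scheme is set inside federated.py.

    import torch

    # Placeholder path; check src/checkpoint for the actual file names.
    state = torch.load("src/checkpoint/ckpt_round25.pt", map_location="cpu")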

Q&A

If you have any questions, you can either open an issue or contact me (thuang374@gatech.edu), and I will reply as soon as I see the issue or email.

Acknowledgment

The codebase is modified and adapted from one of our baselines, RLR.

License

Lockdown is completely free and released under the MIT License.
