This is the official implementation of our paper A Fine-grained Differentially Private Federated Learning against Leakage from Gradients, accepted by IEEE Internet of Things Journal, 2021. This project is developed with Python 3 and PyTorch.
If our work or this repo is useful for your research, please cite our paper as follows:
@ARTICLE{9627872,
author={Zhu, Linghui and Liu, Xinyi and Li, Yiming and Yang, Xue and Xia, Shu-Tao and Lu, Rongxing},
journal={IEEE Internet of Things Journal},
title={A Fine-Grained Differentially Private Federated Learning Against Leakage From Gradients},
year={2022},
volume={9},
number={13},
pages={11500-11512},
doi={10.1109/JIOT.2021.3131258}}
To install requirements:
pip install -r requirements.txt
Make sure the directory structure is as follows:
stealingverification
├── data
│   ├── cifar10
│   └── ...
├── ckpt
├── pogz
└── model
Load a pretrained local model and calculate the PoGZ of each layer on the local validation dataset:
python get_pogz.py --dataset=dataset_name --resume_path=./ckpt/path_to_pretrained_model.pt --local_val_dataset_path=./path_to_local_val_dataset/
The results will be saved in ./pogz/.
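For reference, below is a minimal sketch of how a per-layer PoGZ could be estimated, assuming PoGZ denotes the proportion of near-zero gradient entries in each layer measured on the local validation set. The function name `compute_pogz`, the threshold `eps`, and the use of a cross-entropy loss are illustrative assumptions, not the exact logic of get_pogz.py.

```python
import torch
import torch.nn.functional as F

def compute_pogz(model, val_loader, device, eps=1e-6):
    """Sketch: estimate, for each layer, the fraction of gradient entries
    whose magnitude is (near-)zero on the local validation set."""
    model.to(device).train()
    totals = {name: 0 for name, _ in model.named_parameters()}
    zeros = {name: 0 for name, _ in model.named_parameters()}
    for images, labels in val_loader:
        images, labels = images.to(device), labels.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        for name, param in model.named_parameters():
            if param.grad is None:
                continue
            zeros[name] += (param.grad.abs() < eps).sum().item()
            totals[name] += param.grad.numel()
    # Proportion of (near-)zero gradient entries per layer
    return {name: zeros[name] / max(totals[name], 1) for name in totals}
```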
Load an updated client model and add noise to it:
python add_noise.py --resume_path=./ckpt/path_to_updated_local_model.pt --dataset=dataset_name
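As an illustration only, the sketch below perturbs each layer of the updated model with Gaussian noise whose scale is modulated by that layer's PoGZ value. The helper `add_layerwise_noise` and its scaling rule are assumptions and may differ from the noise-allocation strategy implemented in add_noise.py.

```python
import torch

def add_layerwise_noise(model, pogz, sigma=0.01):
    """Sketch: inject Gaussian noise into each layer of the updated local
    model, scaled by that layer's PoGZ value (assumed weighting rule)."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            scale = sigma * pogz.get(name, 1.0)  # per-layer noise scale (assumption)
            param.add_(torch.randn_like(param) * scale)
    return model
```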
We evaluate the defense against DLG, which uses the Euclidean distance as its loss function. With DPFL, noise is injected into all layers of the shared gradients; the adversary attempts to recover local data from the gradients, but the leakage cannot be performed.
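To make the threat model concrete, here is a minimal sketch of a DLG-style attack that optimizes dummy inputs and labels so that their gradients match the shared (possibly noised) gradients under an L2 (Euclidean) loss. The function `dlg_attack` and its hyperparameters are illustrative, not the exact attack script used in our experiments.

```python
import torch

def dlg_attack(model, target_grads, input_shape, num_classes, steps=300, lr=0.1):
    """Sketch of a DLG-style attack: optimize dummy data and labels so that
    their gradients match the shared gradients under an Euclidean (L2) loss."""
    device = next(model.parameters()).device
    dummy_x = torch.randn(input_shape, device=device, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, device=device, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Soft-label cross-entropy on the dummy label distribution
            loss = torch.sum(-torch.softmax(dummy_y, dim=-1) * torch.log_softmax(pred, dim=-1))
            grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
            # Euclidean distance between dummy gradients and the shared gradients
            grad_diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
            grad_diff.backward()
            return grad_diff
        optimizer.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

When the shared gradients are noised with DPFL, the gradient-matching objective no longer converges to the private training samples, so the reconstruction fails.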