
Dimension-Free Communication in Federated Learning (DeComFL)

DeComFL is a library designed for training/fine-tuning deep learning models in the federated learning scenario. Its unique feature is the utilization of zeroth-order optimization, enabling communication between clients to be limited to just a few scalars, irrespective of the original model's size. This dimension-free communication is the inspiration behind the library's name.

Environment Setup

We use conda as our cross-platform environment management tool. However, because macOS lacks CUDA support, we provide two different environment setup files:

  • Use environment.yml on macOS or if you do not have cuda at hand.
  • Use environment_cuda.yml otherwise.

Throughout this README, we use environment.yml whenever an environment file is needed.

Set Up Steps

  1. Make sure conda is available. See https://conda.io/projects/conda/en/latest/user-guide/install/index.html for more detail.
  2. At the root of this repo, run conda env create -f environment.yml -y.
  3. Once installation is finished, run conda activate decomfl to use the created virtual env.
  4. (Optional) If you see a message like "run conda init before activate", run conda init, restart your terminal/PowerShell, and repeat step 3.
  5. Run any command from the Run Experiments section. If it runs successfully, congratulations, you have set up the environment for this repo!

Run Experiments

  • Run zeroth-order random gradient estimation (RGE) + SGD training: train a model using zeroth-order optimization (ZOO) with RGE. Usage example: python zo_rge_main.py --dataset=cifar10 --num-pert=10 --lr=1e-6 --mu=1e-3

  • Run DeComFL: follows the federated learning routine, splitting the data across clients and training on each of them. Usage example: python decomfl_main.py --large-model=opt-125m --dataset=sst2 --iterations=1000 --train-batch-size=32 --test-batch-size=200 --eval-iterations=25 --num-clients=3 --num-sample-clients=2 --local-update-steps=1 --num-pert=5 --lr=1e-5 --mu=1e-3 --grad-estimate-method=rge-forward
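The --num-pert and --mu flags above correspond to the two knobs of a forward-difference random gradient estimator: the number of random perturbations and the smoothing radius. Below is a minimal sketch of that estimator (illustrative only, not the library's actual implementation; the function name and signature are assumptions). It also shows why communication can be dimension-free: if both sides share the random seed for each perturbation, only one scalar per perturbation needs to be transmitted, regardless of the model dimension.

```python
import numpy as np

def rge_forward(f, x, mu=1e-3, num_pert=10, rng=None):
    """Forward-difference random gradient estimate (RGE).

    Approximates grad f(x) from function evaluations only:
        g = (1/P) * sum_p [(f(x + mu * z_p) - f(x)) / mu] * z_p
    where each z_p is a standard Gaussian perturbation. The bracketed
    finite-difference term is a single scalar per perturbation.
    """
    rng = np.random.default_rng(rng)
    fx = f(x)                      # loss at the current point
    grad = np.zeros_like(x)
    for _ in range(num_pert):
        z = rng.standard_normal(x.shape)      # random direction
        scalar = (f(x + mu * z) - fx) / mu    # one scalar per perturbation
        grad += scalar * z                    # reconstruct a gradient direction
    return grad / num_pert
```

For a quadratic loss f(x) = ||x||^2, whose true gradient is 2x, the estimate converges to 2x as num_pert grows and mu shrinks.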

Citation

@article{li2024achieving,
  title={Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization},
  author={Li, Zhe and Ying, Bicheng and Liu, Zidong and Dong, Chaosheng and Yang, Haibo},
  journal={arXiv preprint arXiv:2405.15861},
  year={2024}
}

Contributors

DeComFL is currently contributed and maintained by Zidong Liu (ComboCurve), Bicheng Ying (Google) and Zhe Li (RIT), and advised by Prof. Haibo Yang (RIT).
