
No One Idles: Efficient Heterogeneous Federated Learning with Parallel Edge and Server Computation

Official implementation for paper "No One Idles: Efficient Heterogeneous Federated Learning with Parallel Edge and Server Computation", ICML 2023

TL;DR: We achieve parallel computation between the central server and the edge nodes in Federated Learning.

If your project requires executing complex computational tasks on the central server, please try our solution! Our framework allows the aggregation process on the central server and the training process on the edge devices to run in parallel, thereby improving training efficiency.
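To make the scheduling idea concrete, here is a minimal, hypothetical sketch (not the repository's actual code; edge_train, server_aggregate, and all timings are stand-ins): while the edge nodes train round t, the server aggregates the updates from round t-1 in a background thread instead of idling.

import threading
import random
import time

def edge_train(node_id, round_id):
    # Hypothetical stand-in for local training on one edge node.
    time.sleep(random.uniform(0.1, 0.3))  # simulate on-device computation
    return (node_id, round_id)

def server_aggregate(updates):
    # Hypothetical stand-in for the server-side aggregation step.
    time.sleep(0.2)  # simulate an expensive server computation
    print(f"aggregated {len(updates)} updates")

pending = None        # edge updates from the previous round, if any
server_thread = None

for r in range(3):    # communication rounds
    # Kick off aggregation of last round's updates in the background ...
    if pending is not None:
        server_thread = threading.Thread(target=server_aggregate, args=(pending,))
        server_thread.start()
    # ... while the edge nodes train the current round in parallel.
    pending = [edge_train(n, r) for n in range(4)]
    if server_thread is not None:
        server_thread.join()

server_aggregate(pending)  # aggregate the final round

In a sequential baseline, the server would aggregate only after every node finishes a round; here the two phases overlap, which is where the efficiency gain comes from.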

Citation

If you use this code, please cite our paper:

@inproceedings{zhang2023noone,
  title={No One Idles: Efficient Heterogeneous Federated Learning with Parallel Edge and Server Computation},
  author={Zhang, Feilong and Liu, Xianming and Lin, Shiyi and Wu, Gang and Zhou, Xiong and Jiang, Junjun and Ji, Xiangyang},
  booktitle={International Conference on Machine Learning},
  year={2023},
  organization={PMLR}
}

Usage

Here is an example for PyTorch:

python PFL.py --dataset cifar10 --node_num 10 --max_lost 3 --R 200 --E 5
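As a rough guide to the flags (the argument definitions in PFL.py are authoritative), following the usual federated-learning conventions: --dataset selects the benchmark dataset, --node_num the number of edge nodes, --R the number of communication rounds, and --E the number of local epochs per round; --max_lost presumably bounds how many rounds a straggler node may lag behind.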
