In this work, we study distributed optimization over directed graphs with row-stochastic weights, which allow each agent to remain unaware of its out-neighbours; requiring that knowledge is often impractical in real-world applications. In general, the information exchange between agents is asynchronous, so the main focus is on asynchronous networks, i.e., networks with communication delays. ADD-OPT is proven to converge linearly to the optimal value when the weight matrix is column-stochastic, whereas FROST achieves linear convergence with row-stochastic weights and uncoordinated step-sizes and converges faster, making it the more practical choice for real-world problems. The contribution of the present work is to implement asynchronous versions of ADD-OPT and FROST and to study their convergence behaviour.
In the considered distributed network, n agents cooperate to solve the problem

    P1:   min_x  F(x) := (1/n) * sum_{i=1}^{n} f_i(x),

where each agent i locally holds a private cost function f_i(x) and communicates with its out-neighbours to find the global minimizer of the aggregate cost F(x). The private cost function used at each agent is a distributed quadratic cost function.
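As a minimal illustration, one common choice of such a private quadratic cost and its gradient could be set up as follows; the matrices `Q_i`, vectors `b_i` and the handle names are assumptions of this sketch, not the repository definitions:

```matlab
% Illustrative private quadratic costs f_i(x) = 0.5*x'*Q_i*x + b_i'*x and their
% gradients; Q_i, b_i and the handle names are assumptions for this sketch.
n = 10;  p = 5;                                   % number of agents, dimension
Q = zeros(p, p, n);  b = randn(p, n);
for i = 1:n
    M = randn(p);
    Q(:,:,i) = M'*M + eye(p);                     % symmetric positive definite
end
f    = @(x, i) 0.5*x'*Q(:,:,i)*x + b(:,i)'*x;     % private cost of agent i
grad = @(x, i) Q(:,:,i)*x + b(:,i);               % its gradient at x

% For this choice, the global minimizer of F(x) solves (sum_i Q_i) x = -sum_i b_i,
% which is handy as a ground truth when plotting residuals:
x_star = -sum(Q, 3) \ sum(b, 2);
```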
The outputs of the research are to study:

- convergence of the `ADD-OPT` and `FROST` algorithms in the case of synchronous networks (networks with no delays and no switchings); a minimal sketch of both updates follows this list
- implementation of the `ADD-OPT` and `FROST` algorithms in the case of asynchronous networks (networks with delays and switchings)
- comparison of residual plots for different delays in the network, with a discussion of the results
- comparison of the `Average MSE` of `FROST` and `ADD-OPT` in the case of asynchronous networks
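As a reference point for the synchronous case, the sketch below illustrates one possible MATLAB implementation of the ADD-OPT and FROST updates (ADD-OPT in Section III-B3 and FROST in Section IV-A of the referenced paper) on illustrative quadratic costs. The network, weights, step-sizes, cost data and all variable names are assumptions of this sketch, not the repository code.

```matlab
% Minimal synchronous sketch of ADD-OPT (column-stochastic weights, common
% step-size) and FROST (row-stochastic weights, uncoordinated step-sizes).
% All data and names are illustrative assumptions, not the repository code.
n = 10;  p = 5;  K = 2000;

% illustrative quadratic costs f_i(x) = 0.5*x'*Q_i*x + b_i'*x
Q = zeros(p, p, n);  b = randn(p, n);
for i = 1:n, M = randn(p); Q(:,:,i) = M'*M + eye(p); end
grad    = @(x, i) Q(:,:,i)*x + b(:,i);            % gradient of f_i at x
gradAll = @(X) cell2mat(arrayfun(@(i) grad(X(:,i), i), 1:n, 'UniformOutput', false));
x_star  = -sum(Q, 3) \ sum(b, 2);                 % true minimizer, for the residual

% directed ring with self-loops
Adj = circshift(eye(n), 1) + eye(n);
A = Adj ./ sum(Adj, 2);                           % row-stochastic    (FROST)
B = Adj ./ sum(Adj, 1);                           % column-stochastic (ADD-OPT)

% ---- ADD-OPT: column-stochastic weights, common step-size -------------------
alpha = 0.01;
X = zeros(p, n);  y = ones(1, n);  Zhat = X;  W = gradAll(Zhat);
res_addopt = zeros(K, 1);
for k = 1:K
    Xnew = X * B.' - alpha * W;                   % x_i <- sum_j b_ij x_j - alpha w_i
    y    = y * B.';                               % y_i <- sum_j b_ij y_j
    ZhatNew = Xnew ./ y;                          % z_i <- x_i / y_i
    W = W * B.' + gradAll(ZhatNew) - gradAll(Zhat);   % gradient tracking
    X = Xnew;  Zhat = ZhatNew;
    res_addopt(k) = mean(vecnorm(Zhat - x_star).^2);  % average MSE across agents
end

% ---- FROST: row-stochastic weights, uncoordinated step-sizes ----------------
alphas = 0.005 + 0.005*rand(n, 1);                % one step-size per agent
X = zeros(p, n);  Y = eye(n);  G = gradAll(X);  Z = G;
res_frost = zeros(K, 1);
for k = 1:K
    Ynew = Y * A.';                               % y_i <- sum_j a_ij y_j,  y_i(0) = e_i
    Xnew = X * A.' - Z .* alphas.';               % x_i <- sum_j a_ij x_j - alpha_i z_i
    Gnew = gradAll(Xnew);
    Z = Z * A.' + Gnew ./ diag(Ynew).' - G ./ diag(Y).';  % gradient tracking
    X = Xnew;  Y = Ynew;  G = Gnew;
    res_frost(k) = mean(vecnorm(X - x_star).^2);
end

semilogy(1:K, res_addopt, 1:K, res_frost);
legend('ADD-OPT', 'FROST');  xlabel('iteration k');  ylabel('average MSE');
```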
The algorithms are developed with MATLAB (R2020b) and tested on the Windows platform. MATLAB alone, preferably R2020a or R2020b, is sufficient to execute the scripts. Backward compatibility with earlier versions of the software has not been tested, and the evaluation of the algorithms has so far been carried out only on Windows.

In case of any issues, kindly reach me at narahari.kasagattaramesh@aalto.fi
The project is organized into the following folders:

- `sync_networks` contains implementations of the `push-sum`, `subgradient-push`, `ADD-OPT`, `Projected Subgradient` and `FROST` algorithms with synchronous networks
- `async_networks` contains implementations of the `ADD-OPT` and `FROST` algorithms with asynchronous networks
- `triton_scripts` contains test scripts and logs of ADD-OPT with delay networks, for 28 step-sizes with 10 delays each
- `assets` contains `plots`, holding the convergence and residual plots (.png format), and `matvar`, where the MATLAB workspace variables are stored
The following section describes the `.m` files and functions used in the project.

- `compare_residual_plots.m`: script generating the residual plot comparison
- `compute_gradient.m`: function handle to compute the gradient of a function at x
- `compute_residual.m`: function handle to compute the average mean-square-error for any algorithm
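As an illustration of what such handles compute, a hypothetical sketch (not necessarily identical to the repository implementations) assuming the agents' estimates are stored column-wise:

```matlab
% Hypothetical sketches of the two helpers; the handle signatures and the
% column-wise storage of the agents' estimates are assumptions, not the
% repository implementations.
compute_gradient = @(x, Qi, bi) Qi*x + bi;                     % gradient of a quadratic f_i at x
compute_residual = @(X, x_star) mean(vecnorm(X - x_star).^2);  % average MSE over agents
% X is p-by-n (column i = estimate of agent i); x_star is the true minimizer.
```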
Change the working directory to the path where the code is stored and run the required `.m` file.
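For example, from the MATLAB command window (the script name below is hypothetical and only illustrates the workflow):

```matlab
% Hypothetical example of running one of the synchronous-network scripts;
% 'frost_sync.m' is an illustrative name, not necessarily a file in the repo.
cd('sync_networks');        % change the working directory to where the code is stored
run('frost_sync.m');        % run the required .m file
```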
This is an ongoing part of my Master's thesis at Aalto University. The thesis is supervised by Themistoklis Charalambous, Associate Professor at Aalto University, who also heads the Distributed and Networked Control Systems research group, which focuses on blending control theory, communication theory and information theory. As part of this research group, my thesis focuses primarily on reproducing the distributed optimization algorithms proposed in the research paper listed under Reference Work, and on extending the implementation of ADD-OPT (Section III-B3) and FROST (Section IV-A) to time-varying graphs with delays, with switchings, and with both delays and switchings.
Xin, R., Xi, C. & Khan, U. A. FROST—Fast row-stochastic optimization with uncoordinated step-sizes. EURASIP J. Adv. Signal Process. 2019, 1 (2019).