Strong consistency of random gradient-free algorithms for distributed optimization

Indexed by:Journal article

Date of Publication:2017-03-01

Journal:OPTIMAL CONTROL APPLICATIONS & METHODS

Included Journals:SCIE, EI

Volume:38

Issue:2

Page Number:247-265

ISSN No.:0143-2087

Key Words:distributed optimization; multi-agent systems; random gradient-free method; Gaussian smoothing; convergence analysis

Abstract:A distributed multi-agent convex optimization problem over a network with time-varying connectivity is considered, where the agents collectively minimize a sum of nonsmooth but Lipschitz continuous functions subject to a convex constraint set. Based on Gaussian smoothing of the objective functions, a distributed projected random gradient-free algorithm is considered for the constrained optimization problem, where each agent performs a local averaging operation, takes one-sided or two-sided randomized gradient approximations instead of subgradients to minimize its own objective function, and projects onto the constraint set. Bounds on the limiting performance of the algorithm in mean are obtained, and it is proven that consensus is achieved in mean among the agents for diminishing step sizes. It is shown that, with appropriately selected step sizes, the agent estimates generated by the algorithm converge to the same stationary point of the smoothed version of the problem with probability 1. Therefore, for sufficiently small smoothing parameters, consensus and convergence of the agent estimates to an optimal solution are obtained with probability 1. A numerical example is provided to justify the theoretical analysis. Copyright (c) 2016 John Wiley & Sons, Ltd.
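The per-agent update described in the abstract (local averaging, a randomized gradient-free step via Gaussian smoothing, then projection) can be sketched as follows. This is a minimal illustration under assumed data, not the paper's exact algorithm: the three-agent network, the local objectives f_i(x) = |x - c_i|, the box constraint [-1, 1], the mixing matrix W, and the 1/k step-size schedule are all choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 3 agents, each with a nonsmooth Lipschitz local objective
# f_i(x) = |x - c_i|; the team minimizes sum_i f_i(x) over X = [-1, 1].
c = np.array([-0.5, 0.2, 0.9])
fs = [lambda x, ci=ci: abs(x - ci) for ci in c]

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box constraint X = [lo, hi]."""
    return np.clip(x, lo, hi)

def two_sided_oracle(f, x, mu):
    """Two-sided randomized gradient approximation via Gaussian smoothing:
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u, with u standard Gaussian."""
    u = rng.standard_normal()
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

# Doubly stochastic mixing weights for the local averaging step
# (a fixed connected network, for simplicity; the paper allows time-varying).
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

x = rng.uniform(-1, 1, size=3)   # initial agent estimates
mu = 1e-3                        # smoothing parameter
for k in range(1, 5001):
    alpha = 1.0 / k              # diminishing step size
    v = W @ x                    # local averaging over neighbors
    g = np.array([two_sided_oracle(fs[i], v[i], mu) for i in range(3)])
    x = project(v - alpha * g)   # gradient-free step, then projection

# The minimizer of sum_i |x - c_i| over [-1, 1] is the median of c, i.e. 0.2;
# the agent estimates should cluster near it.
print(x)
```

The two-sided oracle is an unbiased estimator of the gradient of the Gaussian-smoothed objective, which is why, with diminishing step sizes and a sufficiently small smoothing parameter mu, the estimates approach both consensus and an optimizer of the original nonsmooth problem.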