Chen Xingmin (陈性敏)

Personal Information

Associate Professor

Master's Supervisor

Gender: Male

Alma Mater: Chinese Academy of Sciences

Degree: PhD

Affiliation: School of Mathematical Sciences


Publications


Strong consistency of random gradient-free algorithms for distributed optimization


Paper Type: Journal Article

Publication Date: 2017-03-01

Journal: OPTIMAL CONTROL APPLICATIONS & METHODS

Indexed by: SCIE, EI

Volume: 38

Issue: 2

Pages: 247-265

ISSN: 0143-2087

Keywords: distributed optimization; multi-agent systems; random gradient-free method; Gaussian smoothing; convergence analysis

Abstract: A distributed multi-agent convex optimization problem over a network with time-varying connectivity is considered, where the agents are to collectively minimize a sum of nonsmooth but Lipschitz continuous functions subject to a convex constraint set. Based on Gaussian smoothing of the objective functions, a distributed projective random gradient-free algorithm is considered for the constrained optimization problem, where each agent performs a local averaging operation, uses one-sided or two-sided randomized gradient approximations instead of subgradients to minimize its own objective function, and projects onto the constraint set. The bounds on the limiting performance of the algorithm in mean are obtained, and it is proven that there is mean consensus among agents for diminishing step sizes. It is shown that, with appropriately selected step sizes, the agent estimates generated by the algorithm converge to the same stationary point of the smoothed version of the problem with probability 1. Therefore, with sufficiently small smoothing parameters, we obtain consensus and convergence of the agent estimates to an optimal solution with probability 1. A numerical example is provided to justify the theoretical analysis. Copyright (c) 2016 John Wiley & Sons, Ltd.
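The three per-agent steps described in the abstract (local averaging, a two-sided randomized gradient approximation of the Gaussian-smoothed objective, and projection onto the constraint set) can be sketched as follows. This is an illustrative toy instance, not the paper's implementation: the nonsmooth objectives (l1 distances to anchor points), the box constraint, the fixed ring network (the paper allows time-varying connectivity), and all parameter choices (smoothing parameter `mu`, step sizes `1/k**0.75`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumption, not from the paper): n agents, each holding a
# nonsmooth Lipschitz objective f_i(x) = ||x - a_i||_1, with constraint
# set X = [-1, 1]^d, whose projection is coordinate-wise clipping.
# The minimizer of sum_i f_i over X is the coordinate-wise median of
# the anchors a_i (n odd makes it unique).
n, d = 5, 2
anchors = rng.uniform(-0.8, 0.8, size=(n, d))

def f(i, x):
    return np.abs(x - anchors[i]).sum()

def project(x):
    return np.clip(x, -1.0, 1.0)

# Doubly stochastic mixing matrix for a fixed ring network (the paper
# treats time-varying graphs; a static ring keeps the sketch short).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

mu = 1e-3                              # Gaussian smoothing parameter
x = rng.uniform(-1, 1, size=(n, d))    # initial agent estimates

for k in range(1, 5001):
    alpha = 1.0 / k ** 0.75            # diminishing step size
    v = W @ x                          # local averaging step
    g = np.empty_like(x)
    for i in range(n):
        u = rng.standard_normal(d)     # random Gaussian direction
        # two-sided randomized gradient approximation of the
        # mu-smoothed objective, used in place of a subgradient
        g[i] = (f(i, v[i] + mu * u) - f(i, v[i] - mu * u)) / (2 * mu) * u
    x = project(v - alpha * g)         # projected update

# Consensus: all agents should end up near their common average,
# which in turn should approach the constrained minimizer.
consensus_gap = np.max(np.abs(x - x.mean(axis=0)))
opt = np.median(anchors, axis=0)
```

With diminishing step sizes the averaging step drives the disagreement between agents toward zero while the randomized gradient steps steer the common trajectory toward a minimizer of the smoothed sum, mirroring the consensus-plus-convergence result stated in the abstract.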