Indexed by:Conference Paper
Date of Publication:2017-01-01
Included Journals:EI, CPCI-S, Scopus
Page Number:8407-8412
Key Words:Nonsmooth distributed optimization; randomized gradient-free algorithm; sequential Gaussian smoothing; directed graphs
Abstract:Randomized gradient-free algorithms based on sequential Gaussian smoothing are proposed for distributed optimization over a time-varying random network, where the collective goal of the agents is to minimize the sum of locally known cost functions. Each agent has access only to its own nonsmooth convex function, constrained to a commonly known convex set. Based on sequential Gaussian smoothing of the objective functions, distributed projected randomized gradient-free algorithms are developed for the constrained optimization problem: each agent performs a local averaging operation, takes a one-sided or two-sided randomized gradient approximation in place of a subgradient to minimize its own objective function, and projects onto the constraint set. Bounds on the limiting performance of the algorithms in mean are obtained, and both mean and almost sure consensus among the agents are established. It is shown that, with appropriately selected sequences of step sizes and smoothing parameters, the agents' estimates generated by the algorithms converge to the same optimal solution with probability 1.
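The per-agent update described in the abstract (local averaging with neighbors, a two-sided Gaussian-smoothed gradient approximation in place of a subgradient, then projection onto the common constraint set) can be sketched as below. This is a minimal illustration, not the paper's algorithm: the local costs `f_i(x) = |x - a_i|`, the ring averaging matrix `W`, the interval constraint set, and the step-size and smoothing schedules are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed local nonsmooth convex costs f_i(x) = |x - a_i|;
# their sum is minimized at any median of a (here, any point in [0, 0.5]).
a = np.array([-1.0, 0.0, 0.5, 2.0])
n = len(a)

def f(i, x):
    return abs(x - a[i])

def project(x, lo=-1.5, hi=1.5):
    # Projection onto the commonly known convex set, here the interval [lo, hi].
    return min(max(x, lo), hi)

# Assumed doubly stochastic averaging matrix on a 4-agent ring.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = rng.uniform(-1.5, 1.5, size=n)  # agents' initial estimates
K = 20000
for k in range(1, K + 1):
    alpha = 1.0 / k ** 0.75   # assumed diminishing step-size sequence
    mu = 1.0 / k              # assumed vanishing smoothing parameter
    v = W @ x                 # local averaging with neighbors
    for i in range(n):
        u = rng.standard_normal()
        # Two-sided randomized gradient approximation of the
        # Gaussian-smoothed objective (no subgradient is evaluated).
        g = (f(i, v[i] + mu * u) - f(i, v[i] - mu * u)) / (2 * mu) * u
        x[i] = project(v[i] - alpha * g)

print(x)  # agents' estimates cluster near a common minimizer of sum_i f_i
```

Replacing the difference quotient with `(f(i, v[i] + mu * u) - f(i, v[i])) / mu * u` gives the one-sided variant mentioned in the abstract; both are unbiased estimators of the gradient of the Gaussian-smoothed cost, which is why no differentiability of the `f_i` is needed.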