Relaxed conditions for convergence analysis of online back-propagation algorithm with L-2 regularizer for Sigma-Pi-Sigma neural network
Paper type: Journal article
Date published: 2018-01-10
Journal: NEUROCOMPUTING
Indexed in: SCIE, EI, Scopus
Volume: 272
Pages: 163-169
ISSN: 0925-2312
Keywords: L-2 regularizer; Sigma-Pi-Sigma network; Convergence; Boundedness
Abstract: Boundedness estimates are investigated for training with the online back-propagation method with an L-2 regularizer for Sigma-Pi-Sigma neural networks. This brief presents a unified convergence analysis, exploiting theorems of White for the method of stochastic approximation. We apply the regularizer method to derive boundedness estimates for the Sigma-Pi-Sigma network, and give conditions for determining convergence that ensure the back-propagation estimator converges almost surely to a parameter value which locally minimizes the expected squared error loss. In addition, weight boundedness estimates are derived through the squared regularizer, and this boundedness is then exploited to prove convergence of the algorithm. A simulation is also given to verify the theoretical findings. (C) 2017 Elsevier B.V. All rights reserved.
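To illustrate the kind of update the abstract refers to, the following is a minimal sketch (not the paper's exact formulation) of one online back-propagation step with an L-2 penalty added to the squared error for a small Sigma-Pi-Sigma network. The layer sizes, the pairwise grouping of Pi units, the sigmoid output activation, and the parameter names eta and lam are assumptions made for illustration only.

```python
# Hypothetical sketch: online BP with L-2 regularizer for a tiny
# Sigma-Pi-Sigma network (Sigma layer -> Pi layer of pairwise products
# -> Sigma output unit).  All sizes and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_sigma1, group = 3, 4, 2            # 4 Sigma-1 units, Pi units multiply pairs
n_pi = n_sigma1 // group                   # 2 Pi units
W1 = rng.normal(scale=0.5, size=(n_sigma1, n_in))   # Sigma-1 weights
w2 = rng.normal(scale=0.5, size=n_pi)                # output Sigma weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    s = W1 @ x                                        # Sigma-1 layer: weighted sums
    p = np.array([np.prod(s[g * group:(g + 1) * group])
                  for g in range(n_pi)])              # Pi layer: products of groups
    y = sigmoid(w2 @ p)                               # output Sigma layer
    return s, p, y

def online_bp_step(x, t, eta=0.1, lam=1e-3):
    """One stochastic-gradient step on E = 0.5*(y-t)^2 + 0.5*lam*||weights||^2."""
    global W1, w2
    s, p, y = forward(x)
    delta_out = (y - t) * y * (1.0 - y)               # dE/d(net of output unit)
    grad_w2 = delta_out * p + lam * w2                # L-2 term contributes lam*w
    # Back-propagate through the Pi layer: d p_g / d s_j equals the
    # product of the other Sigma-1 outputs in the same group.
    grad_W1 = lam * W1.copy()
    for g in range(n_pi):
        idx = range(g * group, (g + 1) * group)
        for j in idx:
            others = np.prod([s[k] for k in idx if k != j])
            grad_W1[j] += delta_out * w2[g] * others * x
    W1 -= eta * grad_W1
    w2 -= eta * grad_w2
    return 0.5 * (y - t) ** 2

# Toy usage: repeatedly present a single pattern online.
for step in range(200):
    loss = online_bp_step(np.array([0.2, -0.5, 0.8]), t=0.7)
print("final squared error:", loss)
```

The L-2 regularizer appears only as the extra `lam * w` term in each gradient; it is this term that keeps the weight sequence bounded, which the paper then uses to establish convergence.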