Professor
Supervisor of Doctorate Candidates
Supervisor of Master's Candidates
Title of Paper: Relaxed conditions for convergence analysis of online back-propagation algorithm with L-2 regularizer for Sigma-Pi-Sigma neural network
Date of Publication: 2018-01-10
Journal: NEUROCOMPUTING
Included Journals: SCIE, EI, Scopus
Volume: 272
Page Number: 163-169
ISSN No.: 0925-2312
Key Words: L-2 regularizer; Sigma-Pi-Sigma network; Convergence; Boundedness
Abstract: Boundedness estimations are investigated for the training of the online back-propagation method with an L-2 regularizer for the Sigma-Pi-Sigma neural network. This brief presents a unified convergence analysis, exploiting White's theorems for the method of stochastic approximation. We apply the regularizer method to derive boundedness estimations for the Sigma-Pi-Sigma network, and also give conditions for deterministic convergence, ensuring that the back-propagation estimator converges almost surely to a parameter value that locally minimizes the expected squared error loss. In addition, weight boundedness estimations are derived through the squared regularizer, and this boundedness is then exploited to prove the convergence of the algorithm. A simulation is also given to verify the theoretical findings. (C) 2017 Elsevier B.V. All rights reserved.
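The training scheme described in the abstract can be sketched in code. The following is an illustrative toy implementation only, not the paper's experimental setup: the layer sizes, the pairing of Sigma units into product groups, the learning rate `eta`, and the penalty constant `lam` are all assumptions, and the Sigma units are taken as plain linear sums. The key ingredients from the abstract are present: a Sigma layer (weighted sums), a Pi layer (products of Sigma outputs), an output Sigma unit, and online gradient updates that include the L-2 regularization term `lam * w` so that the weights stay bounded during training.

```python
import numpy as np

# Toy Sigma-Pi-Sigma network trained by online back-propagation with an
# L-2 regularizer. All sizes and hyperparameters below are illustrative
# assumptions, not values from the paper.
rng = np.random.default_rng(0)

n_in, n_sigma = 2, 4              # inputs, hidden Sigma units (assumed)
groups = [(0, 1), (2, 3)]         # each Pi unit multiplies a pair of Sigma units
W = rng.normal(scale=0.5, size=(n_sigma, n_in))  # input -> Sigma weights
v = rng.normal(scale=0.5, size=len(groups))      # Pi -> output weights
eta, lam = 0.05, 1e-3             # learning rate, L-2 penalty (assumed)

def forward(x):
    s = W @ x                                             # Sigma layer: sums
    p = np.array([np.prod(s[list(g)]) for g in groups])   # Pi layer: products
    return s, p, v @ p                                    # output Sigma: sum

def train_step(x, t):
    """One online step on sample (x, t) for the regularized squared loss
    E = 0.5*(y - t)**2 + 0.5*lam*(||W||^2 + ||v||^2)."""
    global W, v
    s, p, y = forward(x)
    e = y - t
    grad_v = e * p + lam * v          # dE/dv: error term plus L2 term
    grad_W = lam * W                  # start from the L2 term, then add dE/dW
    for k, g in enumerate(groups):
        for j in g:
            # product of the other Sigma outputs in this Pi group
            others = np.prod([s[i] for i in g if i != j])
            grad_W[j] += e * v[k] * others * x
    v -= eta * grad_v
    W -= eta * grad_W

def mse(X, T):
    return np.mean([(forward(x)[2] - t) ** 2 for x, t in zip(X, T)])

# Tiny demo: learn the target y = x0 * x1 from an online sample stream.
X_eval = rng.uniform(-1, 1, size=(200, n_in))
T_eval = X_eval[:, 0] * X_eval[:, 1]
mse_before = mse(X_eval, T_eval)
for _ in range(2000):
    x = rng.uniform(-1, 1, size=n_in)
    train_step(x, x[0] * x[1])
mse_after = mse(X_eval, T_eval)
```

In this sketch the regularizer enters each update as the extra shrinkage term `lam * w`, which is exactly the mechanism the abstract exploits: the penalty keeps the weight sequence bounded, and that boundedness underpins the convergence argument.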