Convergence of batch gradient learning algorithm with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks

Indexed by:Journal Article

Date of Publication:2015-03-03

Journal:NEUROCOMPUTING

Included Journals:SCIE, EI, Scopus

Volume:151

Issue:P1

Page Number:333-341

ISSN No.:0925-2312

Key Words:Sigma-Pi-Sigma neural networks; Batch gradient learning algorithm; Convergence; Smoothing L-1/2 regularization

Abstract:Sigma-Pi-Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L-1/2 regularizer is very useful and efficient, and can be taken as a representative of the L-q (0 < q < 1) regularizers. However, the nonsmoothness of L-1/2 regularization may lead to an oscillation phenomenon. The aim of this paper is to develop a novel batch gradient method with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks. Compared with the conventional gradient learning algorithm, this method produces sparser weights and a simpler network structure, and it improves learning efficiency. A comprehensive study of the weak and strong convergence results for this algorithm is also presented, indicating that the gradient of the error function goes to zero and the weight sequence converges to a fixed value, respectively. (C) 2014 Elsevier B.V. All rights reserved.
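As a rough illustration of the technique the abstract describes, the sketch below applies batch gradient descent with a smoothed L-1/2 penalty to a plain linear least-squares model. The quartic smoothing of |w| near zero is one common choice in the smoothing L-1/2 literature and is an assumption here, as are the smoothing width `a`, the penalty weight `lam`, and the linear model standing in for the Sigma-Pi-Sigma architecture; the paper's exact formulation may differ.

```python
import numpy as np

# Hypothetical sketch: batch gradient descent with a smoothed L-1/2 penalty.
# The quartic patch near zero is an assumed smoothing, not necessarily the
# paper's; it replaces the kink of |w| at the origin with a C^1 polynomial.

def smooth_abs(w, a=0.05):
    """Smooth |w|: exact for |w| >= a, quartic polynomial for |w| < a.
    The minimum value is 3a/8 > 0, so the penalty f(w)**0.5 stays smooth."""
    out = np.abs(w)
    inside = out < a
    out[inside] = -w[inside]**4 / (8 * a**3) + 3 * w[inside]**2 / (4 * a) + 3 * a / 8
    return out

def smooth_abs_grad(w, a=0.05):
    """Derivative of smooth_abs: sign(w) outside [-a, a], polynomial inside."""
    g = np.sign(w)
    inside = np.abs(w) < a
    g[inside] = -w[inside]**3 / (2 * a**3) + 3 * w[inside] / (2 * a)
    return g

def batch_gradient_step(w, X, y, lr=0.1, lam=1e-3, a=0.05):
    """One batch step on mean squared error plus lam * sum(f(w)**0.5)."""
    resid = X @ w - y
    grad_loss = X.T @ resid / len(y)          # gradient of the data term
    f = smooth_abs(w, a)
    # d/dw f(w)**0.5 = 0.5 * f(w)**(-0.5) * f'(w); f >= 3a/8 > 0, so this is safe
    grad_pen = 0.5 * f**(-0.5) * smooth_abs_grad(w, a)
    return w - lr * (grad_loss + lam * grad_pen)

# Toy usage: a sparse ground truth recovered by the regularized iteration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -0.5, 0.25]                # only three nonzero weights
y = X @ w_true + 0.01 * rng.normal(size=100)
w = 0.1 * rng.normal(size=10)
for _ in range(500):
    w = batch_gradient_step(w, X, y)
print(np.round(w, 3))                         # small weights driven toward zero
```

The smoothing removes the singular gradient of |w|**0.5 at zero, which is what eliminates the oscillation the abstract attributes to the nonsmooth L-1/2 regularizer while keeping its sparsity-inducing effect.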

Pre One:Convergence Analysis of a New Self Organizing Map Based Optimization (SOMO) Algorithm

Next One:Approximate Geodesic Computation Based on an Iterative Ant Colony Algorithm