Li Zhengxue (李正学)

Personal Information

Associate Professor

Master's Supervisor

Gender: Male

Alma Mater: Jilin University

Degree: PhD

Affiliation: School of Mathematical Sciences

Email: lizx@dlut.edu.cn

Publications

Convergence of batch gradient learning algorithm with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks

Paper Type: Journal Article

Date Published: 2015-03-03

Journal: NEUROCOMPUTING

Indexed by: SCIE, EI, Scopus

Volume: 151

Issue: P1

Pages: 333-341

ISSN: 0925-2312

Keywords: Sigma-Pi-Sigma neural networks; Batch gradient learning algorithm; Convergence; Smoothing L-1/2 regularization

Abstract: Sigma-Pi-Sigma neural networks are known to provide more powerful mapping capability than traditional feed-forward neural networks. The L-1/2 regularizer is very useful and efficient, and can be taken as a representative of all the L-q (0 < q < 1) regularizers. However, the nonsmoothness of L-1/2 regularization may lead to an oscillation phenomenon during training. The aim of this paper is to develop a novel batch gradient method with smoothing L-1/2 regularization for Sigma-Pi-Sigma neural networks. Compared with the conventional gradient learning algorithm, this method produces sparser weights and a simpler network structure, and it improves the learning efficiency. A comprehensive study on the weak and strong convergence results for this algorithm is also presented, indicating that the gradient of the error function goes to zero and the weight sequence goes to a fixed value, respectively. (C) 2014 Elsevier B.V. All rights reserved.
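To make the smoothing idea concrete, below is a minimal sketch of batch gradient descent with a smoothed L-1/2 penalty. It is not the paper's reference implementation: the piecewise-polynomial smoothing function f is an assumption borrowed from related work on smoothing L-1/2 regularization, and an ordinary linear model stands in for the Sigma-Pi-Sigma architecture. The key point it illustrates is that replacing |w| with a smooth surrogate f(w) that is bounded away from zero removes the nondifferentiability (and the resulting oscillation) of the raw L-1/2 term sum |w_i|^{1/2} at w_i = 0.

```python
# Sketch: batch gradient descent on E(w) = 0.5*||Xw - y||^2 + lam * sum f(w)^{1/2},
# where f is a smooth surrogate for |.|. Hypothetical example, not the paper's code.
import numpy as np

def f_smooth(x, a=0.1):
    """Smooth surrogate for |x|: equals |x| for |x| >= a, and a C^2 polynomial
    on (-a, a). Its minimum is f(0) = 3a/8 > 0, so sqrt(f(x)) is smooth everywhere."""
    poly = -x**4 / (8 * a**3) + 3 * x**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(x) >= a, np.abs(x), poly)

def f_smooth_grad(x, a=0.1):
    """Derivative of f_smooth; equals sign(x) outside (-a, a)."""
    poly_grad = -x**3 / (2 * a**3) + 3 * x / (2 * a)
    return np.where(np.abs(x) >= a, np.sign(x), poly_grad)

def batch_gradient_step(w, X, y, lam=0.1, eta=1e-2):
    """One batch update: full-dataset gradient of the data term plus the
    gradient of the smoothed L-1/2 penalty."""
    grad_loss = X.T @ (X @ w - y)
    # d/dw f(w)^{1/2} = f'(w) / (2*sqrt(f(w))); safe since f(w) >= 3a/8 > 0.
    grad_reg = lam * f_smooth_grad(w) / (2 * np.sqrt(f_smooth(w)))
    return w - eta * (grad_loss + grad_reg)

# Toy data with a sparse ground truth to show the sparsification effect.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 0.5]
y = X @ w_true + 0.01 * rng.normal(size=50)

w = 0.1 * rng.normal(size=10)
for _ in range(2000):
    w = batch_gradient_step(w, X, y)
print(np.round(w, 3))  # weights on the irrelevant coordinates are driven toward zero
```

Because f agrees with |x| outside (-a, a) and is smooth inside, the regularized error function is differentiable everywhere, which is what permits the gradient-goes-to-zero (weak) and weight-convergence (strong) arguments the abstract refers to; the penalty gradient here stays bounded rather than blowing up near zero as it would for the raw |w|^{1/2} term.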