Wu Wei (吴微)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Male

Alma Mater: Mathematical Institute, University of Oxford, UK

Degree: Doctorate

Affiliation: School of Mathematical Sciences

Discipline: Computational Mathematics

Email: wuweiw@dlut.edu.cn


Publications


Convergence of online gradient method for feedforward neural networks with smoothing L-1/2 regularization penalty


Paper Type: Journal Article

Publication Date: 2014-05-05

Journal: NEUROCOMPUTING

Indexed in: SCIE, EI, Scopus

Volume: 131

Pages: 208-216

ISSN: 0925-2312

Keywords: Feedforward neural networks; Online gradient method; Smoothing L-1/2 regularization; Boundedness; Convergence

Abstract: Minimization of the training regularization term has been recognized as an important objective for sparse modeling and generalization in feedforward neural networks. Most studies so far have focused on the popular L-2 regularization penalty. In this paper, we consider the convergence of the online gradient method with a smoothing L-1/2 regularization term. For the normal L-1/2 regularization, the objective function contains a non-convex, non-smooth, and non-Lipschitz term, which causes oscillation of the error function and of the gradient norm. However, using smoothing approximation techniques, this deficiency of the normal L-1/2 regularization term can be addressed. This paper establishes strong convergence results for the smoothing L-1/2 regularization. Furthermore, we prove the boundedness of the weights during network training, so the usual assumption that the weights remain bounded is no longer needed in the convergence proof. Simulation results support the theoretical findings and demonstrate that our algorithm performs better than two other algorithms with L-2 and normal L-1/2 regularizations, respectively. (C) 2013 Elsevier B.V. All rights reserved.
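To make the idea in the abstract concrete, below is a minimal, illustrative Python sketch of an online (per-sample) gradient update for a one-hidden-layer feedforward network with a smoothed L-1/2 penalty. The quartic smoothing of |w| near zero, the network architecture, and all parameter values (a, eta, lam, n_hidden, epochs) are assumptions chosen for illustration only; they are not taken from the paper, and the authors' exact smoothing function and algorithm may differ.

import numpy as np

# Smoothing of |w|: equals |w| outside (-a, a); inside, a quartic polynomial
# removes the kink at 0.  The smoothed L-1/2 penalty is then sqrt(g(w)),
# which is differentiable everywhere because g(w) >= a/4 > 0.
# NOTE: the constant a and this particular polynomial are illustrative assumptions.
def g(w, a=0.05):
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), poly)

def g_prime(w, a=0.05):
    dpoly = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) >= a, np.sign(w), dpoly)

def smooth_l_half_grad(w, a=0.05):
    # Gradient of sqrt(g(w)), well defined even at w = 0.
    return g_prime(w, a) / (2 * np.sqrt(g(w, a)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online(X, y, n_hidden=8, eta=0.05, lam=1e-3, epochs=50, seed=0):
    """Online gradient descent for a one-hidden-layer network,
    with the smoothed L-1/2 penalty applied to all weights (a sketch)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    V = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input-to-hidden weights
    u = rng.normal(scale=0.5, size=n_hidden)           # hidden-to-output weights
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = sigmoid(V @ x)          # hidden activations
            out = sigmoid(u @ h)        # network output
            err = out - t
            # Gradients of the per-sample squared error 0.5 * err**2
            d_out = err * out * (1 - out)
            grad_u = d_out * h
            grad_V = np.outer(d_out * u * h * (1 - h), x)
            # Online update: error gradient plus smoothed-penalty gradient
            u -= eta * (grad_u + lam * smooth_l_half_grad(u))
            V -= eta * (grad_V + lam * smooth_l_half_grad(V))
    return V, u

A call such as train_online(X, y) updates the weights after every sample, and the penalty gradient stays bounded near zero weights, which is the practical benefit of replacing the raw |w|^(1/2) term with its smoothed version.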