Wu Wei

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Male

Alma Mater: Mathematical Institute, University of Oxford, UK

Degree: Doctorate

Affiliation: School of Mathematical Sciences

Discipline: Computational Mathematics

E-mail: wuweiw@dlut.edu.cn


Publications


Batch gradient method with smoothing L-1/2 regularization for training of feedforward neural networks


Paper Type: Journal Article

Date of Publication: 2014-02-01

Journal: NEURAL NETWORKS

Indexed by: SCIE, EI, PubMed, Scopus

Volume: 50

Page Range: 72-78

ISSN: 0893-6080

Keywords: Feedforward neural networks; Batch gradient method; Smoothing L-1/2 regularization; Convergence

Abstract: The aim of this paper is to develop a novel method for pruning feedforward neural networks by introducing an L-1/2 regularization term into the error function. This procedure forces the weights to become smaller during training, so that they can eventually be removed after training. The usual L-1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during training. A key point of this paper is to modify the usual L-1/2 regularization term by smoothing it at the origin. This approach offers three advantages: first, it removes the oscillation of the gradient value; second, it gives better pruning, in that the final weights to be removed are smaller than those produced by the usual L-1/2 regularization; third, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided. (C) 2013 Elsevier Ltd. All rights reserved.
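The smoothing idea described in the abstract is straightforward to sketch in code. Below is a minimal NumPy illustration of a smoothed L-1/2 penalty of the kind discussed above: the absolute value |w| is replaced near the origin by a quartic polynomial that matches |w| in value and slope at w = ±a, so the regularizer f(w)^(1/2) is differentiable everywhere and its gradient no longer oscillates. The particular polynomial, the threshold a, and the toy batch gradient step are illustrative assumptions, not necessarily the exact choices made in the paper.

```python
import numpy as np

def smoothed_abs(w, a=0.1):
    """Smooth surrogate for |w|: equals |w| for |w| >= a and a quartic
    polynomial for |w| < a, matching value and slope at w = +-a.
    (Illustrative choice; the paper's smoothing function may differ.)"""
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def smoothed_abs_grad(w, a=0.1):
    """Derivative of smoothed_abs; continuous at w = 0, unlike sign(w)."""
    inner = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) >= a, np.sign(w), inner)

def l_half_penalty(w, a=0.1):
    """Smoothed L-1/2 regularizer: sum_i f(w_i)^(1/2).
    Since f(0) = 3a/8 > 0, the square root is differentiable everywhere."""
    return np.sum(np.sqrt(smoothed_abs(w, a)))

def l_half_penalty_grad(w, a=0.1):
    """Gradient of the smoothed L-1/2 regularizer (chain rule)."""
    f = smoothed_abs(w, a)
    return 0.5 * f**(-0.5) * smoothed_abs_grad(w, a)

# Toy batch gradient step: the regularizer's gradient is simply added
# to the gradient of the data-fitting error before the weight update.
rng = np.random.default_rng(0)
w = rng.normal(size=20)           # hypothetical weight vector
lam, eta = 1e-3, 0.1              # regularization strength, learning rate
error_grad = rng.normal(size=20)  # stands in for the batch error gradient
w -= eta * (error_grad + lam * l_half_penalty_grad(w))
```

Because the smoothed penalty is finite and differentiable at the origin, small weights are driven smoothly toward zero during training rather than oscillating around it, which is what enables both the improved pruning and the convergence analysis the abstract mentions.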