Wu Wei (吴微)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Male

Alma Mater: Mathematical Institute, University of Oxford, UK

Degree: Ph.D.

Affiliation: School of Mathematical Sciences

Discipline: Computational Mathematics

E-mail: wuweiw@dlut.edu.cn


Publications


Group L1/2 Regularization for Pruning Hidden Layer Nodes of Feedforward Neural Networks


Publication Type: Journal Article

Date of Publication: 2019-01-01

Journal: IEEE ACCESS

Indexed by: SCIE

Volume: 7

Page Range: 9540-9557

ISSN: 2169-3536

Keywords: Feedforward neural networks; pruning hidden layer nodes and weights; group L1/2; smooth group L1/2; group lasso; convergence

Abstract: A group L1/2 regularization term is defined and introduced into the conventional error function for pruning the hidden layer nodes of feedforward neural networks. This group L1/2 regularization method (GL1/2) can prune not only the redundant hidden nodes but also the redundant weights of the surviving hidden nodes of the neural networks. By comparison, the popular group lasso regularization (GL2) can prune the redundant hidden nodes of the neural networks, but cannot prune any redundant weights of the surviving hidden nodes. A disadvantage of GL1/2 is that it involves a non-smooth absolute value function, which causes oscillation in the numerical computation and difficulty in the convergence analysis. As a remedy, the absolute value function is approximated by a smooth function, resulting in a smooth group L1/2 regularization method (SGL1/2). Numerical simulations on a few benchmark data sets show that, compared with GL2, SGL1/2 achieves better accuracy and removes more redundant nodes and weights of the surviving hidden nodes. A convergence theorem is also proved for SGL1/2.
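
The abstract describes the group L1/2 penalty and its smoothing only in words. As a rough, hedged sketch (not the paper's exact formulation), the Python snippet below assumes the penalty takes the form lam * Σ_g (Σ_i |w_{g,i}|)^(1/2), with each hidden node's row of incoming weights treated as one group, and smooths |x| with the common surrogate sqrt(x² + ε); the function names, the lam and eps values, and the smoothing choice are all illustrative assumptions, and the paper's own smoothing function may differ.

```python
import numpy as np

def smooth_abs(x, eps=1e-4):
    # Smooth surrogate for |x|: sqrt(x^2 + eps) is one common choice.
    # Illustrative assumption -- the paper defines its own smoothing.
    return np.sqrt(x * x + eps)

def smooth_group_l12_penalty(W, lam=1e-3, eps=1e-4):
    # W has shape (hidden_nodes, inputs); row g holds all incoming
    # weights of hidden node g, i.e. one group. Assumed penalty:
    #   lam * sum_g ( sum_i smooth_abs(W[g, i]) ) ** 0.5
    # Driving a whole row toward zero prunes the node, while the inner
    # absolute values can also zero out individual surviving weights,
    # which is the behavior the abstract credits to GL1/2 over GL2.
    group_l1 = smooth_abs(W, eps).sum(axis=1)  # smoothed L1 norm per group
    return lam * np.sqrt(group_l1).sum()

# Example: penalty on a small random hidden layer (5 nodes, 8 inputs);
# in training, this term would be added to the conventional error function.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))
print(smooth_group_l12_penalty(W))
```

By contrast, the group lasso (GL2) penalty would replace the inner smoothed L1 norm with the Euclidean norm of each row, which can zero out whole groups but not individual weights within a surviving group, consistent with the comparison drawn in the abstract.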