Wu Wei (吴微)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Male

Alma mater: Mathematical Institute, University of Oxford, UK

Degree: Doctorate

Affiliation: School of Mathematical Sciences

Discipline: Computational Mathematics

E-mail: wuweiw@dlut.edu.cn


Publications


Input Layer Regularization of Multilayer Feedforward Neural Networks


Paper type: Journal article

Date published: 2017-06-28

Journal: IEEE ACCESS

Indexed by: SCIE, EI, Scopus

Volume: 5

Pages: 10979-10985

ISSN: 2169-3536

Keywords: multilayer feedforward neural network; autoencoder; compressive sensing; regularization of input layer; L-1 and L-1/2 regularization

Abstract: Multilayer feedforward neural networks (MFNNs) have been widely used for classification or for approximation of nonlinear mappings described by a data set consisting of input and output samples. In many MFNN applications, a common compressive sensing task is to find the redundant dimensions of the input data. The regularization technique presented in this paper aims to eliminate those redundant dimensions and thereby compress the input layer. This is achieved by introducing an L-1 or L-1/2 regularizer into the training of the input-layer weights. By comparison, existing work usually applies regularization to the hidden layer, to obtain a better representation of the data set and a sparser network. The gradient-descent method is used to solve the resulting optimization problem. Numerical experiments, including a simulated approximation problem and three classification problems (Monk, Sonar, and the MNIST data set), illustrate the algorithm.
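The idea in the abstract can be sketched in a few lines of NumPy. This is a hypothetical toy setup, not the paper's actual experiments: a one-hidden-layer network trained by subgradient descent, where an L-1 penalty is applied only to the input-layer weight matrix so that the rows corresponding to redundant input dimensions are driven toward zero. The data, network sizes, learning rate, and penalty strength are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 input dimensions, but only the first 2 carry signal;
# dimensions 2 and 3 are redundant and should be suppressed.
X = rng.normal(size=(200, 4))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1])

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(4, n_hidden))   # input-layer weights (L1-regularized)
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden-layer weights (unregularized)
b2 = np.zeros(1)

lr, lam = 0.05, 0.01   # learning rate and L1 strength (illustrative values)

for _ in range(2000):
    # Forward pass through the one-hidden-layer network
    H = np.tanh(X @ W1 + b1)
    out = (H @ W2 + b2).ravel()
    err = out - y

    # Backward pass for the (half) mean squared error
    g_out = err[:, None] / len(X)
    gW2 = H.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_H = g_out @ W2.T * (1 - H**2)        # tanh derivative
    gW1 = X.T @ g_H + lam * np.sign(W1)    # L1 subgradient on input layer only
    gb1 = g_H.sum(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Row-wise L1 norms of W1 indicate each input dimension's relevance;
# the rows for the redundant dimensions should shrink toward zero.
relevance = np.abs(W1).sum(axis=1)
print(relevance)
```

An L-1/2 regularizer would replace the penalty term with one whose subgradient grows near zero (typically smoothed in practice), which tends to produce even sparser input-layer weights.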