Indexed by: Journal article
Date of Publication: 2017-06-28
Journal: IEEE ACCESS
Included Journals: SCIE, EI, Scopus
Volume: 5
Page Number: 10979-10985
ISSN No.: 2169-3536
Key Words: Multilayer feedforward neural network; autoencoder; compressive sensing; regularization of input layer; L1 and L1/2 regularization
Abstract: Multilayer feedforward neural networks (MFNNs) are widely used for classification and for approximating nonlinear mappings described by data sets of input and output samples. In many MFNN applications, a common compressive-sensing task is to identify the redundant dimensions of the input data. The regularization technique presented in this paper aims to eliminate these redundant dimensions and thereby compress the input layer. This is achieved by applying an L1 or L1/2 regularizer to the training of the input-layer weights. By contrast, existing work usually applies regularization to the hidden layer, aiming at a better representation of the data set and sparsification of the network. A gradient-descent method is used to solve the resulting optimization problem. Numerical experiments on a simulated approximation problem and three classification problems (the Monk, Sonar, and MNIST data sets) illustrate the algorithm.
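The core idea of the abstract — penalizing only the input-layer weights with an L1 term during gradient-descent training, so that weights attached to redundant input dimensions shrink toward zero — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data set, network size, learning rate, and penalty strength `lam` are all hypothetical choices, and only the L1 (not the L1/2) regularizer is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: the target depends only on x[:, 0];
# x[:, 1] is a redundant input dimension the regularizer should suppress.
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0:1])

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))  # input-layer weights (regularized)
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))  # hidden-layer weights (unregularized)
b2 = np.zeros(1)

lam = 1e-2  # L1 penalty strength (hypothetical value)
lr = 0.05   # gradient-descent learning rate (hypothetical value)

for _ in range(3000):
    # Forward pass through a one-hidden-layer network.
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2
    err = out - y

    # Backward pass: mean-squared-error gradients.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)
    # L1 subgradient lam * sign(W1) is added to the input-layer weights only.
    dW1 = X.T @ dH / len(X) + lam * np.sign(W1)
    db1 = dH.mean(axis=0)

    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

# Column-wise L1 norms of the input weights: the redundant dimension's
# weights should end up much smaller than the informative dimension's.
norms = np.abs(W1).sum(axis=1)
print(norms)
```

After training, the L1 norm of the weights leaving the redundant input is driven close to zero, which is the "compression of the input layer" described in the abstract: near-zero input columns identify dimensions that can be pruned. The L1/2 regularizer works analogously but with a steeper penalty near zero (its subgradient involves |w|^(-1/2), which needs smoothing at w = 0).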