
Feedforward Neural Networks with a Hidden Layer Regularization Method


Indexed by: Journal paper

Date of Publication: 2018-10-01

Journal: SYMMETRY-BASEL

Included Journals: SCIE

Volume: 10

Issue: 10

ISSN No.: 2073-8994

Key Words: sparsity; feedforward neural networks; hidden layer regularization; group Lasso; Lasso

Abstract: In this paper, we propose a group Lasso regularization term as a hidden layer regularization method for feedforward neural networks. Adding a group Lasso term to the standard error function is an effective way to eliminate redundant or unnecessary hidden layer neurons from the network structure. For comparison, the popular Lasso regularization method is also introduced into the standard error function of the network. Our hidden layer regularization method forces each group of outgoing weights to shrink toward zero during training, so the corresponding hidden neurons can be removed after training. This simplifies the network structure and reduces the computational cost. Numerical simulations use K-fold cross-validation with k = 5 to avoid overtraining and to select the best learning parameters. The numerical results show that the proposed hidden layer regularization method consistently prunes more redundant hidden layer neurons on each benchmark dataset without loss of accuracy. In contrast, the existing Lasso regularization method prunes only redundant individual weights and cannot prune entire hidden layer neurons.
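The neuron-pruning mechanism described in the abstract can be sketched in a few lines: treat each hidden neuron's outgoing-weight row as one group, penalize the sum of the groups' Euclidean norms, and prune neurons whose whole group shrinks to (near) zero. This is a minimal NumPy sketch under assumed shapes and names (`W_out`, `lam`, `tol` are illustrative, not the paper's exact notation or settings):

```python
import numpy as np

def group_lasso_penalty(W_out, lam=0.01):
    """Group Lasso penalty over hidden neurons.

    W_out: (n_hidden, n_out) matrix of outgoing weights; each row is
    one neuron's group. Penalty = lam * sum_j ||W_out[j, :]||_2.
    """
    return lam * np.sum(np.linalg.norm(W_out, axis=1))

def group_lasso_grad(W_out, lam=0.01, eps=1e-12):
    """Subgradient of the penalty w.r.t. W_out.

    Each row is scaled by lam / ||row||_2, which drives whole groups
    toward zero together; eps guards against division by zero so
    already-pruned rows get a zero gradient.
    """
    norms = np.linalg.norm(W_out, axis=1, keepdims=True)
    return lam * W_out / np.maximum(norms, eps)

def prunable_neurons(W_out, tol=1e-3):
    """Indices of hidden neurons whose entire outgoing-weight group
    fell below tol, i.e. neurons removable after training."""
    return np.where(np.linalg.norm(W_out, axis=1) < tol)[0]
```

During training, `group_lasso_grad` would simply be added to the error gradient for the hidden-to-output weights; the plain Lasso comparison would instead use `lam * np.sign(W_out)`, which zeroes individual weights but, as the abstract notes, not whole neurons.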
