
Smooth group L-1/2 regularization for input layer of feedforward neural networks

Release Time: 2019-03-12

Document Type: Journal Article

Date of Publication: 2018-11-07

Journal: NEUROCOMPUTING

Included Journals: Scopus, SCIE

Volume: 314

Page Number: 109-119

ISSN: 0925-2312

Key Words: Feedforward neural network; Input layer compression; Feature selection; Smooth group L-1/2 regularization; Convergence

Abstract: A smooth group L-1/2 regularization method is proposed to identify and remove the redundant input nodes of feedforward neural networks, or equivalently the redundant dimensions of the input data of a given data set. This is achieved by introducing a smooth group L-1/2 regularizer with respect to the input nodes into the error function, which drives some weight vectors of the input nodes to zero. The main advantage of the method is that it removes not only the redundant nodes but also some redundant weights of the surviving nodes. By comparison, L-1 regularization (Lasso) is mainly designed for removing redundant weights and is not very effective at removing redundant nodes, while the group Lasso can remove redundant nodes but cannot remove individual weights of the surviving nodes. Another advantage of the proposed method is that it replaces the non-smooth absolute value function in the common L-1/2 regularizer with a smooth function, which reduces the oscillation caused by the non-smoothness and makes it possible to prove convergence properties of the proposed training algorithm. Numerical simulations are performed to illustrate the efficiency of the algorithm. (C) 2018 Elsevier B.V. All rights reserved.
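The penalty described in the abstract can be sketched as follows: each input node contributes a group consisting of its outgoing weights, and the regularizer sums, over the groups, the square root of a smoothed group norm. The NumPy snippet below is a minimal illustration only; the particular piecewise-polynomial smoothing of the absolute value, the parameter names (lam, a), and the pruning threshold are assumptions for illustration and are not taken from the paper itself.

import numpy as np

def smooth_abs(x, a=0.05):
    # Smooth surrogate for |x| (an assumed, commonly used piecewise form):
    # equals |x| for |x| >= a; for |x| < a it is a quartic polynomial that
    # matches the value and slope at a, so it is differentiable at zero.
    x = np.abs(x)
    return np.where(x >= a, x, -x**4 / (8 * a**3) + 3 * x**2 / (4 * a) + 3 * a / 8)

def smooth_group_l12_penalty(W, lam=1e-3, a=0.05):
    # W has shape (n_inputs, n_hidden); row i collects all weights leaving
    # input node i, i.e. one group per input node. The penalty is
    # lam * sum_i sqrt(smooth_abs(||W[i]||)), which pushes whole rows (and
    # hence input nodes / data dimensions) toward zero during training.
    group_norms = np.linalg.norm(W, axis=1)   # ||w_i|| for each input node
    return lam * np.sum(np.sqrt(smooth_abs(group_norms, a)))

# Example: after training, input nodes whose group norm stays below a small
# threshold (assumed value) can be treated as redundant and pruned.
W = np.random.randn(8, 5) * 0.5
W[[2, 6], :] = 1e-4                            # two nearly-dead input nodes
print("penalty:", smooth_group_l12_penalty(W))
print("prunable inputs:", np.where(np.linalg.norm(W, axis=1) < 1e-3)[0])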
