
A New Conjugate Gradient Method with Smoothing L-1/2 Regularization Based on a Modified Secant Equation for Training Neural Networks


Indexed by: Journal Article

Date of Publication: 2018-10-01

Journal: NEURAL PROCESSING LETTERS

Included Journals: SCIE

Volume: 48

Issue: 2 (Special Issue)

Page Number: 955-978

ISSN: 1370-4621

Key Words: Feedforward neural networks; Conjugate gradient method; Modified secant equation; Regularization; Global convergence

Abstract: This paper proposes a new conjugate gradient method with smoothing L-1/2 regularization, based on a modified secant equation, for training neural networks. A descent search direction is generated, with an adaptive learning rate selected under the strong Wolfe conditions. Two adaptive parameters are introduced so that the new training method possesses both the quasi-Newton property and the sufficient descent property. Numerical experiments on five benchmark classification problems from the UCI repository show that, compared with other conjugate gradient training algorithms, the new algorithm achieves roughly the same or better learning capacity, together with significantly better generalization capacity and network sparsity. Under mild assumptions, a global convergence result for the proposed training method is also proved.
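
To make the smoothing idea concrete, below is a minimal Python/NumPy sketch (not taken from the paper) of a smoothed L-1/2 penalty and its gradient. The piecewise-polynomial smoothing of |w| near zero, the threshold a, and the coefficient lam are assumptions following the smoothing L-1/2 literature; the paper's exact smoothing function and parameter choices may differ.

    import numpy as np

    def smoothed_abs(w, a=0.1):
        # Piecewise-polynomial smoothing of |w|: equals |w| for |w| > a and a
        # smooth quartic on [-a, a]; value and first derivative match at |w| = a.
        inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
        return np.where(np.abs(w) > a, np.abs(w), inner)

    def l_half_penalty(w, lam=1e-4, a=0.1):
        # Smoothed L-1/2 regularizer: lam * sum_i h(w_i)^(1/2).
        return lam * np.sum(smoothed_abs(w, a) ** 0.5)

    def l_half_grad(w, lam=1e-4, a=0.1):
        # d/dw h(w)^(1/2) = h'(w) / (2 * sqrt(h(w))); since h(w) >= 3a/8 > 0,
        # this gradient is defined everywhere, unlike that of the raw |w|^(1/2) penalty.
        h = smoothed_abs(w, a)
        hp = np.where(np.abs(w) > a, np.sign(w),
                      -w**3 / (2 * a**3) + 3 * w / (2 * a))
        return lam * hp / (2.0 * np.sqrt(h))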
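
The next sketch illustrates the overall shape of a conjugate gradient training step with a strong-Wolfe line search. The Polak-Ribiere+ formula for beta and the steepest-descent safeguard are placeholders: the paper instead derives its two adaptive parameters from a modified secant equation, which is not reproduced here. scipy.optimize.line_search is used to obtain a step size satisfying the strong Wolfe conditions.

    import numpy as np
    from scipy.optimize import line_search

    def cg_step(loss, grad, w, d_prev=None, g_prev=None):
        # One conjugate gradient step; loss/grad should include the smoothed
        # L-1/2 penalty, e.g. grad(w) = grad_loss(w) + l_half_grad(w).
        g = grad(w)
        if d_prev is None:
            d = -g                          # first iteration: steepest descent
        else:
            # Placeholder beta (Polak-Ribiere+), standing in for the paper's
            # modified-secant-equation-based parameters.
            beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))
            d = -g + beta * d_prev
            if g @ d >= 0:                  # safeguard: keep d a descent direction
                d = -g
        # Step size satisfying the strong Wolfe conditions.
        alpha = line_search(loss, grad, w, d, gfk=g)[0]
        if alpha is None:                   # line search failed; take a small step
            alpha = 1e-3
        return w + alpha * d, d, g

A training loop would call cg_step repeatedly, feeding the returned d and g back in as d_prev and g_prev for the next iteration.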
