Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks

Indexed by:Journal article

Date of Publication:2016-03-08

Journal:SPRINGERPLUS

Included Journals:SCIE, PubMed, Scopus

Volume:5

Issue:1

Page Number:295

ISSN No.:2193-1801

Key Words:Feedforward neural networks; Adaptive momentum; Smoothing L-1/2 regularization; Convergence

Abstract:This paper presents new theoretical results on the backpropagation algorithm with smoothing L-1/2 regularization and adaptive momentum for feedforward neural networks with a single hidden layer. Specifically, we show that the gradient of the error function tends to zero and the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are also more general, since we do not require the error function to be quadratic or uniformly convex, and the assumptions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the proposed algorithm yields a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed afterwards, which simplifies the network and reduces operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.
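
The page does not reproduce the update rule itself; the sketch below is a minimal, illustrative reconstruction of one batch update with a smoothed L-1/2 penalty and an adaptive momentum term. The particular smoothing function, the gradient-norm-based choice of the momentum coefficient, and the function names (smoothed_l12_grad, train_step) are assumptions made for illustration, not the authors' exact formulation.

import numpy as np

def smoothed_l12_grad(w, a=0.1):
    # Gradient of a smoothed |w|^(1/2) penalty: |w| is replaced near zero
    # by a polynomial so the penalty stays differentiable (an illustrative
    # smoothing, assumed here rather than taken from the paper).
    absw = np.abs(w)
    inner = np.where(absw >= a, absw,
                     -absw**4 / (8 * a**3) + 3 * absw**2 / (4 * a) + 3 * a / 8)
    d_inner = np.where(absw >= a, np.sign(w),
                       (-absw**3 / (2 * a**3) + 3 * absw / (2 * a)) * np.sign(w))
    return d_inner / (2 * np.sqrt(inner))

def train_step(w, grad_E, delta_prev, eta=0.05, lam=1e-3, mu=0.9):
    # One batch update: gradient of the regularized error plus an adaptive
    # momentum term whose coefficient is scaled by the current gradient norm
    # and kept bounded (an illustrative adaptation rule).
    g = grad_E + lam * smoothed_l12_grad(w)
    alpha = min(mu, mu * np.linalg.norm(g) / (np.linalg.norm(delta_prev) + 1e-12))
    delta = -eta * g + alpha * delta_prev
    return w + delta, delta

Because the smoothed penalty's gradient remains bounded near zero, small weights are steadily driven toward zero during training, which is the sparsifying effect the abstract refers to.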
