
Convergence of gradient method for Elman networks

Release Time: 2019-03-10

Document Type: Journal Article

Date of Publication: 2008-09-01

Journal: APPLIED MATHEMATICS AND MECHANICS-ENGLISH EDITION

Included Journals: Scopus, EI, SCIE

Volume: 29

Issue: 9

Page Number: 1231-1238

ISSN: 0253-4827

Key Words: Elman network; gradient learning algorithm; convergence; monotonicity

Abstract: The gradient method for training Elman networks with a finite training sample set is considered. Monotonicity of the error function during the iteration is shown. Weak and strong convergence results are proved, indicating that the gradient of the error function converges to zero and the weight sequence converges to a fixed point, respectively. A numerical example is given to support the theoretical findings.
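
To make the setting concrete, below is a minimal, illustrative Python sketch of gradient training for an Elman (simple recurrent) network on a finite sample set. It assumes one sigmoid hidden layer with a linear output and treats the context units as constant inputs at each step; the names train_elman, eta, and n_hidden are hypothetical, and the code does not reproduce the paper's exact formulation or the conditions of its theorems.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact algorithm): batch gradient
# descent on an Elman network with one sigmoid hidden layer and a linear
# output, trained on a finite set of (input, target) pairs.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elman(samples, n_hidden=4, eta=0.05, n_iter=2000, seed=0):
    """Return the learned weights and the per-iteration error values."""
    rng = np.random.default_rng(seed)
    n_in = samples[0][0].shape[0]
    W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))       # input -> hidden
    W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
    w_out = rng.normal(scale=0.1, size=n_hidden)               # hidden -> output

    errors = []
    for _ in range(n_iter):
        gW_in = np.zeros_like(W_in)
        gW_rec = np.zeros_like(W_rec)
        gw_out = np.zeros_like(w_out)
        h_prev = np.zeros(n_hidden)          # context units, reset each pass
        E = 0.0
        for x, o in samples:                 # finite training sample set
            h = sigmoid(W_in @ x + W_rec @ h_prev)
            y = w_out @ h
            e = y - o
            E += 0.5 * e * e
            # Gradients with the context state treated as a constant input
            # at each step (a common simplification of Elman training).
            gw_out += e * h
            dh = e * w_out * h * (1.0 - h)
            gW_in += np.outer(dh, x)
            gW_rec += np.outer(dh, h_prev)
            h_prev = h
        # Plain gradient step on all weights.
        W_in -= eta * gW_in
        W_rec -= eta * gW_rec
        w_out -= eta * gw_out
        errors.append(E)
    return (W_in, W_rec, w_out), errors

if __name__ == "__main__":
    # Toy data: predict the previous value of a scalar signal.
    xs = [np.array([np.sin(0.3 * t)]) for t in range(20)]
    samples = list(zip(xs[1:], [float(x[0]) for x in xs[:-1]]))
    _, errors = train_elman(samples)
    print(errors[0], errors[-1])
```

Recording the error at every iteration makes it easy to check empirically that, for a sufficiently small learning rate, the error sequence is non-increasing, which is the behaviour the paper's monotonicity result describes.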
