Mingsong Cheng (程明松)

Personal Information

Associate Professor

Supervisor of Master's Students

Gender: Male

Alma Mater: Peking University

Degree: Ph.D.

Affiliation: School of Mathematical Sciences

Office: Room A1014, Innovation Park Building (创新园大厦)

Email: mscheng@dlut.edu.cn

Publications

Convergence analysis of online gradient method for BP neural networks

Paper Type: Journal article

Date Published: 2011-01-01

Journal: NEURAL NETWORKS

Indexed in: SCIE, EI, PubMed, Scopus

Volume: 24

Issue: 1

Page Range: 91-98

ISSN: 0893-6080

Keywords: Neural networks; Backpropagation learning; Online gradient method; Weak convergence; Strong convergence

Abstract: This paper considers a class of online gradient learning methods for backpropagation (BP) neural networks with a single hidden layer. We assume that in each training cycle, each sample in the training set is supplied in a stochastic order to the network exactly once. It is interesting that these stochastic learning methods can be shown to be deterministically convergent. This paper presents some weak and strong convergence results for the learning methods, indicating that the gradient of the error function goes to zero and the weight sequence goes to a fixed point, respectively. The conditions on the activation function and the learning rate to guarantee the convergence are relaxed compared with the existing results. Our convergence results are valid for not only S-S type neural networks (both the output and hidden neurons are Sigmoid functions), but also for P-P, P-S and S-P type neural networks, where S and P represent Sigmoid and polynomial functions, respectively. (C) 2010 Elsevier Ltd. All rights reserved.
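To make the training scheme described in the abstract concrete, the sketch below is a minimal illustrative reconstruction, not the authors' code: an S-S type network (sigmoid hidden and output units) trained with the online gradient method on squared error, where each training cycle presents every sample exactly once in a freshly shuffled order and updates the weights immediately after each sample. All function names and hyperparameters here are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def train_online_bp(X, y, n_hidden=4, eta=0.5, n_cycles=5000, seed=0):
    """Online gradient training of a single-hidden-layer S-S network.

    In each training cycle every sample is presented exactly once, in a
    stochastic (shuffled) order, and the weights are updated immediately
    after each sample -- the scheme whose convergence the paper analyzes.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_in = X.shape
    V = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input-to-hidden weights
    w = rng.normal(scale=0.5, size=n_hidden)          # hidden-to-output weights

    for _ in range(n_cycles):
        for i in rng.permutation(n_samples):  # each sample exactly once per cycle
            x, t = X[i], y[i]
            h_in = V @ x                      # hidden pre-activations
            h = sigmoid(h_in)                 # hidden outputs
            o_in = w @ h                      # output pre-activation
            o = sigmoid(o_in)                 # network output
            # Backpropagated gradient of the squared error 0.5 * (o - t)**2
            delta_o = (o - t) * sigmoid_prime(o_in)
            grad_w = delta_o * h
            grad_V = np.outer(delta_o * w * sigmoid_prime(h_in), x)
            w -= eta * grad_w                 # online update after this sample
            V -= eta * grad_V
    return V, w

# Illustrative usage on the XOR problem (hyperparameters are guesses):
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
V, w = train_online_bp(X, y)
```

Although the per-cycle shuffling makes the update order random, the paper's weak-convergence result states that the gradient of the error function nevertheless tends to zero deterministically, and the strong result adds convergence of the weight sequence to a fixed point.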