Chao Zhang (Professor)

Professor   Doctoral Supervisor   Master's Supervisor

Gender: Male

Alma Mater: Dalian University of Technology

Degree: Doctorate

Affiliation: School of Mathematical Sciences

Discipline: Computational Mathematics

Office: Innovation Park Building #A1024

Contact: 0411-84708351

Email: chao.zhang@dlut.edu.cn

Training pi-sigma network by online gradient algorithm with penalty for small weight update

Paper Type: Journal Article

Publication Date: 2007-12-01

Journal: NEURAL COMPUTATION

Indexed in: SCIE, PubMed, Scopus

Volume: 19

Issue: 12

Pages: 3356-3368

ISSN: 0899-7667

Abstract: A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most commonly used training method for feedforward neural networks. However, when it is applied to pi-sigma networks, the weight update increment may become very small, especially early in training, resulting in very slow convergence. To overcome this difficulty, we introduce an adaptive penalty term into the error function so as to increase the magnitude of the weight update increment when it is too small. This strategy brings about faster convergence, as shown by the numerical experiments carried out in this letter.
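
To make the training scheme concrete, below is a minimal Python sketch of a pi-sigma network trained sample by sample with an online gradient rule. The network structure (summing units whose product feeds a sigmoid output unit) follows the description above; the class name PiSigmaNetwork, the learning rate, the threshold eps, and the specific penalty form (a term proportional to the weights, applied only when the raw update is very small) are illustrative assumptions, not the exact adaptive penalty analyzed in the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PiSigmaNetwork:
    """Pi-sigma network: summing units feeding a single product unit."""

    def __init__(self, n_inputs, n_summing_units, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per summing unit; the bias is handled by an extra constant input.
        self.W = rng.normal(scale=0.1, size=(n_summing_units, n_inputs + 1))
        self.lr = lr

    def forward(self, x):
        x1 = np.append(x, 1.0)          # augment the input with a bias component
        h = self.W @ x1                 # outputs of the summing units
        y = sigmoid(np.prod(h))         # product unit followed by a sigmoid output
        return x1, h, y

    def online_step(self, x, target, eps=1e-3, lam=0.01):
        """One online gradient update on a single training sample."""
        x1, h, y = self.forward(x)
        err = y - target
        for j in range(self.W.shape[0]):
            # dE/dW_j = (y - t) * y * (1 - y) * (product of the other summing units) * x1
            others = np.prod(np.delete(h, j))
            grad = err * y * (1.0 - y) * others * x1
            # Hypothetical penalty (stand-in for the paper's adaptive term): when the
            # raw update is very small, add a term that pushes the weights away from
            # the origin, which enlarges the summing-unit outputs and later updates.
            if np.linalg.norm(self.lr * grad) < eps:
                grad = grad - lam * self.W[j]
            self.W[j] -= self.lr * grad
        return 0.5 * err ** 2

# Illustrative usage: a few online epochs on a tiny XOR-style data set.
if __name__ == "__main__":
    net = PiSigmaNetwork(n_inputs=2, n_summing_units=2)
    data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
    for epoch in range(500):
        for x, t in data:
            net.online_step(np.array(x), t)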
