Wu Wei

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Male

Alma Mater: Mathematical Institute, University of Oxford, UK

Degree: PhD

Affiliation: School of Mathematical Sciences

Discipline: Computational Mathematics

Email: wuweiw@dlut.edu.cn


Publications


A modified gradient learning algorithm with smoothing L-1/2 regularization for Takagi-Sugeno fuzzy models


Paper Type: Journal Article

Date Published: 2014-08-22

Journal: NEUROCOMPUTING

Indexed In: SCIE, EI, Scopus

Volume: 138

Pages: 229-237

ISSN: 0925-2312

Keywords: Takagi-Sugeno (T-S) fuzzy models; Gradient descent method; Convergence; Gaussian-type membership function; Variable selection; Regularizer

Abstract: A popular and feasible approach to determining the appropriate size of a neural network is to remove unnecessary connections from an oversized network. The advantage of L-1/2 regularization has been recognized for sparse modeling. However, the nonsmoothness of L-1/2 regularization may lead to an oscillation phenomenon. An approach with smoothing L-1/2 regularization is proposed in this paper for Takagi-Sugeno (T-S) fuzzy models, in order to improve learning efficiency and to promote sparsity of the models. The new smoothing L-1/2 regularizer removes the oscillation. Moreover, it enables us to prove weak and strong convergence results for zero-order T-S fuzzy neural networks. Furthermore, a relationship between the learning rate parameter and the penalty parameter is given to guarantee the convergence. Simulation results are provided to support the theoretical findings, and they show the superiority of the smoothing L-1/2 regularization over the original L-1/2 regularization. (C) 2014 Elsevier B.V. All rights reserved.
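The core idea in the abstract can be sketched numerically: replace |w| near zero with a smooth piecewise-polynomial surrogate that is bounded away from zero, then penalize its square root so the gradient of the regularizer stays finite and the oscillation of raw L-1/2 penalties disappears. The sketch below is a minimal illustration on a toy least-squares problem, not the paper's T-S fuzzy model; the smoothing width `a`, learning rate `eta`, penalty weight `lam`, and the specific polynomial are illustrative assumptions.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """C^1 smoothing of |w|: equals |w| for |w| >= a, polynomial (>= 3a/8) inside.
    Illustrative choice of smoothing; the paper's exact surrogate may differ."""
    return np.where(np.abs(w) >= a,
                    np.abs(w),
                    -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8)

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs; continuous at |w| = a."""
    return np.where(np.abs(w) >= a,
                    np.sign(w),
                    -w**3 / (2 * a**3) + 3 * w / (2 * a))

def reg_grad(w, lam=0.1, a=0.1):
    """Gradient of lam * sum_i smooth_abs(w_i)**0.5.
    Well defined everywhere because smooth_abs >= 3a/8 > 0."""
    f = smooth_abs(w, a)
    return lam * 0.5 * f ** (-0.5) * smooth_abs_grad(w, a)

# Toy regression: only the first two of five features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, -1.5, 0.0, 0.0, 0.0]) + 0.01 * rng.normal(size=200)

w = rng.normal(size=5)
eta = 0.01                     # learning rate (must be small relative to lam; see abstract)
for _ in range(5000):
    w -= eta * (X.T @ (X @ w - y) / len(y) + reg_grad(w))

print(np.round(w, 2))          # irrelevant weights are driven toward zero
```

Because the smoothed penalty is differentiable and bounded below on the whole line, the update above is an ordinary gradient step with no subgradient ambiguity at zero, which is what allows the convergence analysis described in the abstract to relate the learning rate and the penalty parameter.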