Date of Publication: 2015-01-01
Journal: SCIENTIA SINICA Mathematica (中国科学 数学)
Affiliation of Author(s): School of Mathematical Sciences
Volume: 45
Issue: 9
Page Number: 1487-1504
ISSN No.: 1674-7216
Abstract: Under the premise of maintaining appropriate learning accuracy,
the number of neurons in a neural network should be kept as small as
possible (constructional sparsification), so as to reduce cost and to
improve robustness and generalization accuracy. We study the
constructional sparsification of feedforward neural networks by using
regularization methods. Apart from the traditional L1 regularization for
sparsification, we mainly use the L_(1/2) regularization. To remove the
oscillation in the iteration process caused by the nonsmoothness of the
L_(1/2) regularizer, we propose to smooth it in a neighborhood of the
nonsmooth point, yielding a smoothing L_(1/2) regularizer. By doing so,
we expect to improve the efficiency of the L_(1/2) regularizer so that it
surpasses the L1 regularizer. This paper summarizes some of our recent
work in this direction, including work on BP feedforward neural networks,
higher-order neural networks, double parallel neural networks, and
Takagi-Sugeno fuzzy models.
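
For illustration only, the following Python sketch trains a tiny
one-hidden-layer network by batch gradient descent with a smoothed
L_(1/2) penalty added to the squared error. It is not the authors'
exact formulation: the paper smooths the regularizer in a neighborhood
of the nonsmooth point, whereas here the globally smooth surrogate
(w^2 + eps^2)^(1/4), which tends to |w|^(1/2) as eps -> 0, is assumed,
and the network size, eps, lam, and lr are arbitrary demonstration
values.

    import numpy as np

    rng = np.random.default_rng(0)

    def smoothed_l_half(w, eps=1e-2):
        """Smooth surrogate for |w|^(1/2); differentiable at w = 0."""
        return (w**2 + eps**2) ** 0.25

    def smoothed_l_half_grad(w, eps=1e-2):
        """Derivative of the surrogate; stays bounded near w = 0."""
        return 0.5 * w * (w**2 + eps**2) ** (-0.75)

    # Tiny regression problem: y = sin(x), one hidden tanh layer.
    X = rng.uniform(-np.pi, np.pi, size=(64, 1))
    Y = np.sin(X)

    W1 = rng.normal(scale=0.5, size=(1, 8));  b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1));  b2 = np.zeros(1)

    lam, lr = 1e-3, 0.05
    for step in range(2000):
        # Forward pass.
        H = np.tanh(X @ W1 + b1)
        P = H @ W2 + b2
        err = P - Y

        # Backward pass: MSE gradients plus the penalty gradient on weights.
        dP = 2 * err / len(X)
        dW2 = H.T @ dP + lam * smoothed_l_half_grad(W2)
        db2 = dP.sum(0)
        dH = dP @ W2.T * (1 - H**2)
        dW1 = X.T @ dH + lam * smoothed_l_half_grad(W1)
        db1 = dH.sum(0)

        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # Weights driven close to zero can be pruned, sparsifying the network.
    penalty = lam * (smoothed_l_half(W1).sum() + smoothed_l_half(W2).sum())
    print("MSE:", float((err**2).mean()), "penalty:", float(penalty),
          "near-zero W1 entries:", int((np.abs(W1) < 1e-2).sum()))

Because the surrogate's gradient is bounded at w = 0, the update rule no
longer jumps between the two one-sided derivatives of |w|^(1/2), which is
the oscillation the smoothing is meant to remove; weights pulled near
zero by the penalty can then be pruned.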