
SPARSE REPRESENTATION LEARNING OF DATA BY AUTOENCODERS WITH L-1/2 REGULARIZATION


Indexed by: Journal article

Date of Publication:2018-01-01

Journal:NEURAL NETWORK WORLD

Included Journals:SCIE

Volume:28

Issue:2

Page Number:133-147

ISSN No.:1210-0552

Key Words:autoencoder; sparse representation; unsupervised feature learning; deep network; L-1/2 regularization

Abstract: Autoencoder networks have been demonstrated to be efficient for unsupervised learning of representations of images, documents and time series. Sparse representation can improve the interpretability of the input data and the generalization of a model by eliminating redundant features and extracting the latent structure of the data. In this paper, we use the L-1/2 regularization method to enforce sparsity on the hidden representation of an autoencoder, thereby achieving a sparse representation of the data. The performance of our approach in terms of unsupervised feature learning and supervised classification is assessed on the MNIST digit data set, the ORL face database and the Reuters-21578 text corpus. The results demonstrate that the proposed autoencoder produces sparser representations and better reconstruction performance than the Sparse Autoencoder and the L-1 regularization Autoencoder. The new representation is also shown to be useful for improving the classification performance of a deep network.
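The core idea of the abstract — adding an L-1/2 penalty on the hidden activations to the autoencoder's reconstruction loss — can be sketched as follows. This is a minimal illustrative sketch in numpy, not the authors' implementation; the network sizes, the tanh encoder, and the weighting coefficient `lam` are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def l_half_penalty(h):
    # L-1/2 penalty: sum of |h_ij|^(1/2) over all hidden activations.
    # Compared with the L-1 norm, this penalty drives more activations
    # toward exactly zero, yielding a sparser hidden representation.
    return np.sum(np.sqrt(np.abs(h)))

def forward(x, W1, b1, W2, b2):
    # Encoder: nonlinear hidden representation (tanh is an assumption here).
    h = np.tanh(x @ W1 + b1)
    # Decoder: linear reconstruction of the input.
    x_hat = h @ W2 + b2
    return h, x_hat

def regularized_loss(x, W1, b1, W2, b2, lam=0.1):
    # Total objective = mean squared reconstruction error
    #                 + lam * (L-1/2 penalty on the hidden representation).
    h, x_hat = forward(x, W1, b1, W2, b2)
    recon = 0.5 * np.mean(np.sum((x_hat - x) ** 2, axis=1))
    return recon + lam * l_half_penalty(h) / x.shape[0]

# Toy data and parameters (sizes are illustrative, not from the paper).
x = rng.normal(size=(16, 8))
W1 = rng.normal(scale=0.1, size=(8, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.1, size=(4, 8)); b2 = np.zeros(8)
```

In practice the objective would be minimized by gradient descent; note that the L-1/2 term is non-smooth at zero, so implementations typically use a smoothed gradient or a thresholding step near zero.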
