Multi-view Sparsity Preserving Projection for dimension reduction

Indexed by: Journal article

Date of Publication: 2016-12-05

Journal: NEUROCOMPUTING

Included Journals: SCIE, EI, Scopus

Volume: 216

Page Number: 286-295

ISSN No.: 0925-2312

Key Words: Multi-view; Dimension reduction; Sparse subspace learning; Multi-view Sparsity Preserving Projection; Sparse representation

Abstract: In the past decade, we have witnessed a surge of interest in learning low-dimensional subspaces for dimension reduction (DR). However, when faced with features from multiple views, most DR methods fail to integrate the compatible and complementary information across views when constructing a low-dimensional subspace. Moreover, multi-view features typically reside in spaces of different dimensionality, which further complicates multi-view subspace learning. How to learn a single common subspace that exploits information from all views is therefore important but challenging. To address this issue, we propose a multi-view sparse subspace learning method called Multi-view Sparsity Preserving Projection (MvSPP). MvSPP seeks a set of linear transforms that project the multi-view features into one common low-dimensional subspace in which the multi-view sparse reconstructive weights are preserved as much as possible. MvSPP thus avoids the incorrect sparse correlations that can arise from the global property of sparse representation within a single view. A co-regularization scheme integrates the multiple views to obtain one common subspace that is consistent across views, and an iterative alternating strategy is presented to obtain the optimal solution of MvSPP. Experiments on several multi-view datasets demonstrate the excellent performance of the method. (C) 2016 Elsevier B.V. All rights reserved.
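To make the abstract's pipeline concrete, here is a minimal sketch in Python of the ingredients it describes: per-view sparse reconstructive weights (via lasso coding of each sample over the others), the Sparsity Preserving Projection generalized eigenproblem, and an iterative alternating loop over views. The exact objective and co-regularization term of MvSPP are not given in the abstract, so the kernel-alignment-style coupling term and all function names below are assumptions for illustration, not the paper's formulation.

```python
# Rough sketch of a multi-view sparsity-preserving projection,
# reconstructed from the abstract alone. The per-view objective follows
# standard Sparsity Preserving Projection; the co-regularization term
# (aligning each view with the other views' current embeddings) is an
# assumed form, not necessarily the paper's exact scheme.
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sparse_weights(X, alpha=0.01):
    """Sparse reconstructive weights: lasso-code each sample over the others."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X[idx].T, X[i])          # dictionary = the other samples
        S[i, idx] = model.coef_
    return S

def _projection(X, S, d, extra=None):
    """Top-d solution of the SPP generalized eigenproblem
    X^T (S + S^T - S^T S) X w = lam X^T X w,
    optionally adding a symmetric co-regularization matrix `extra`."""
    M = S + S.T - S.T @ S
    A = X.T @ M @ X
    if extra is not None:
        A = A + extra
    B = X.T @ X + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
    _, vecs = eigh(A, B)                       # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]                # keep the top-d eigenvectors

def mvspp(views, d, gamma=0.5, n_iter=3):
    """Alternating optimization over views (hypothetical scheme)."""
    S = [sparse_weights(X) for X in views]
    W = [_projection(X, Sv, d) for X, Sv in zip(views, S)]
    for _ in range(n_iter):
        for v, X in enumerate(views):
            # Assumed coupling: pull view v toward the other views'
            # current embeddings Y_u = X_u W_u (kernel-alignment style).
            extra = sum(gamma * (X.T @ (Xu @ W[u])) @ (X.T @ (Xu @ W[u])).T
                        for u, Xu in enumerate(views) if u != v)
            W[v] = _projection(X, S[v], d, extra=extra)
    return W
```

Each view keeps its own transform W[v] (so views of different dimensionality are handled), while the shared low-dimensional embeddings X_v @ W[v] are encouraged to agree, matching the abstract's "one common subspace consistent across views".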

Pre One: Image retrieval based on primitive correlation descriptors

Next One: Metric learning with geometric mean for similarities measurement