Indexed by:Journal Article
Date of Publication:2018-01-01
Journal:IEEE TRANSACTIONS ON IMAGE PROCESSING
Included Journals:SCIE, EI, Scopus
Volume:27
Issue:1
Page Number:335-348
ISSN No.:1057-7149
Key Words:Low-rank representation; subspace learning; large-scale data; dynamic data; online learning
Abstract:Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization over the entire data set whenever new samples are added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, while the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online LRR subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the LRR matrix is also solved incrementally by an efficient online singular value decomposition algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
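The dynamic-updating stage described above hinges on extending an existing SVD when new samples arrive, instead of re-decomposing the whole data matrix. The sketch below is not the authors' algorithm; it is a minimal Brand-style incremental SVD update in NumPy, illustrating the general mechanism such online methods rely on (the function name and rank-truncation policy are illustrative assumptions):

```python
import numpy as np

def incremental_svd(U, s, Vt, C, rank):
    """Illustrative incremental SVD step: given the thin SVD
    U @ diag(s) @ Vt of X, return a rank-truncated SVD of [X, C]
    without re-decomposing the full matrix."""
    k = s.size            # current working rank
    c = C.shape[1]        # number of newly added columns
    proj = U.T @ C        # coordinates of C in the current left basis
    resid = C - U @ proj  # component of C outside the current subspace
    Q, R = np.linalg.qr(resid)
    # Small (k+c) x (k+c) core matrix; its SVD carries the whole update.
    K = np.block([[np.diag(s), proj],
                  [np.zeros((Q.shape[1], k)), R]])
    Uk, sk, Vkt = np.linalg.svd(K, full_matrices=False)
    # Rotate the enlarged bases and truncate back to the working rank.
    U_new = np.hstack([U, Q]) @ Uk
    V_right = np.block([[Vt, np.zeros((k, c))],
                        [np.zeros((c, Vt.shape[1])), np.eye(c)]])
    Vt_new = Vkt @ V_right
    r = min(rank, sk.size)
    return U_new[:, :r], sk[:r], Vt_new[:r, :]
```

The cost per update is dominated by the SVD of the small core matrix K rather than of the full data matrix, which is what makes repeated recomputation avoidable for dynamic data.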