刘秀平 (Xiuping Liu)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Female

Alma Mater: Dalian University of Technology

Degree: Doctorate

Affiliation: School of Mathematical Sciences

E-mail: xpliu@dlut.edu.cn

Publications

Online Low-Rank Representation Learning for Joint Multi-Subspace Recovery and Clustering

Paper Type: Journal Article

Publication Date: 2018-01-01

Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING

Indexed in: SCIE, EI, Scopus

Volume: 27

Issue: 1

Pages: 335-348

ISSN: 1057-7149

Keywords: Low-rank representation; subspace learning; large-scale data; dynamic data; online learning

Abstract: Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online LRR subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the LRR matrix can also be incrementally solved by an efficient online singular value decomposition algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods, including the batch LRR, and significantly outperforms state-of-the-art online methods.
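The dynamic-updating stage described in the abstract hinges on folding new samples into an existing factorization instead of recomputing it from scratch. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm: it learns a thin SVD from a small static batch and then absorbs streamed columns with a standard Brand-style incremental SVD update. Only NumPy is assumed; the function names, the synthetic data, and the truncation rank are illustrative choices, and the paper's method additionally maintains the LRR matrix and handles noise.

```python
# Minimal sketch (assumptions noted above): static batch stage + incremental SVD
# updates for streamed columns, in the spirit of the two-stage scheme in the abstract.
import numpy as np


def static_stage(X_small, rank):
    """Stage 1: thin SVD of a small initial batch (columns are samples)."""
    U, s, Vt = np.linalg.svd(X_small, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]


def dynamic_update(U, s, Vt, B, rank):
    """Stage 2: fold new columns B into the thin SVD U @ diag(s) @ Vt via a
    Brand-style incremental update, then truncate back to `rank`."""
    r, n = Vt.shape
    k = B.shape[1]
    M = U.T @ B                       # coordinates of B in the current subspace
    residual = B - U @ M              # part of B outside the current subspace
    Qb, Rb = np.linalg.qr(residual)   # orthonormal basis for the new directions
    # Small augmented core matrix whose SVD yields the updated factors.
    K = np.block([[np.diag(s), M],
                  [np.zeros((k, r)), Rb]])
    Uk, sk, Vtk = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Qb]) @ Uk
    V_aug = np.block([[Vt.T, np.zeros((n, k))],
                      [np.zeros((k, r)), np.eye(k)]])
    Vt_new = (V_aug @ Vtk.T).T
    return U_new[:, :rank], sk[:rank], Vt_new[:rank, :]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data from a 5-dimensional subspace plus small noise.
    X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 400))
    X += 0.01 * rng.standard_normal(X.shape)

    U, s, Vt = static_stage(X[:, :100], rank=5)          # learn from a small batch
    for start in range(100, 400, 50):                     # stream the rest in chunks
        U, s, Vt = dynamic_update(U, s, Vt, X[:, start:start + 50], rank=5)

    s_batch = np.linalg.svd(X, compute_uv=False)[:5]
    print("online singular values:", np.round(s, 3))
    print("batch  singular values:", np.round(s_batch, 3))
```

Because each update only factorizes the small core matrix K rather than the full data matrix, the per-step cost stays roughly constant as samples arrive, which is the kind of saving over batch recomputation that the abstract attributes to the dynamic-updating stage.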