Zhikui Chen (陈志奎)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Main Position: Teaching

Gender: Male

Alma Mater: Chongqing University

Degree: Doctorate

Affiliation: School of Software; International School of Information and Software

Disciplines: Software Engineering; Computer Software and Theory

Office: Room 405, Comprehensive Building, Development Zone

Contact: Email: zkchen@dlut.edu.cn; Mobile: 13478461921; WeChat: 13478461921; QQ: 1062258606

Email: zkchen@dlut.edu.cn


Publications


Unsupervised multi-view non-negative for law data feature learning with dual graph-regularization in smart Internet of Things


Paper Type: Journal Article

Publication Date: 2019-11-01

Journal: Future Generation Computer Systems - The International Journal of eScience

Indexed In: SCIE, EI

Volume: 100

Pages: 523-530

ISSN: 0167-739X

Keywords: Multi-view learning; Dual graph regularization; Non-negative matrix factorization; Smart Internet of Things

Abstract: In the real world, law data in the smart Internet of Things usually consists of heterogeneous information mixed with noise. Non-negative matrix factorization (NMF) is a popular tool for multi-view learning, and it can be employed to represent and learn heterogeneous law features comprehensively. However, current NMF-based techniques generally assume clean multi-view datasets when learning a common subspace, while in practice the data often contain noise or unrelated items, so the performance of these algorithms may be severely degraded. In this paper, we propose a novel subspace learning model, called Adaptive Dual Graph-regularized Multi-View Non-Negative Feature Learning (ADMFL), for multi-view data representation. We utilize the geometric structures of both the data manifold and the feature manifold to model the distribution of data points in the common subspace. Meanwhile, we reduce the effect of unrelated features by separating out the view-specific features of each view. Moreover, we introduce a weight factor for all views and maintain the sparsity of the latent common representation. An effective objective function is thus designed and iteratively updated until convergence. Experiments on standard datasets demonstrate that the proposed ADMFL method outperforms the compared methods.
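The core building block of the abstract above, dual graph regularization of NMF, can be sketched for a single view. The following is a minimal, illustrative NumPy implementation, not the paper's ADMFL algorithm: it factorizes X ≈ UVᵀ while penalizing Tr(VᵀL_v V) on a sample graph and Tr(UᵀL_u U) on a feature graph, using standard multiplicative updates. The k-NN heat-kernel graph construction, the regularization weights `lam`/`mu`, and all function names are assumptions for the sketch; the full ADMFL model additionally handles multiple views, view weights, view-specific features, and sparsity.

```python
import numpy as np

def knn_graph(P, k=5):
    # Symmetric k-NN affinity matrix over the rows of P, with a simple
    # heat-kernel weight; a stand-in for standard graph construction.
    n = P.shape[0]
    sq = np.sum(P ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (P @ P.T)
    np.fill_diagonal(d2, np.inf)               # exclude self-loops
    sigma2 = np.median(d2[np.isfinite(d2)])    # bandwidth heuristic
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / sigma2)
    return np.maximum(W, W.T)                  # symmetrize

def dual_graph_nmf(X, rank, lam=0.1, mu=0.1, n_iter=200, seed=0):
    # X: (m features x n samples), non-negative.
    # Objective (single view): ||X - U V^T||_F^2
    #   + lam * Tr(V^T Lv V) + mu * Tr(U^T Lu U),
    # where Lv = Dv - Wv (sample graph) and Lu = Du - Wu (feature graph).
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, rank)) + 1e-3
    V = rng.random((n, rank)) + 1e-3
    Wv = knn_graph(X.T); Dv = np.diag(Wv.sum(axis=1))  # graph on samples
    Wu = knn_graph(X);   Du = np.diag(Wu.sum(axis=1))  # graph on features
    eps = 1e-9
    for _ in range(n_iter):
        # Multiplicative updates keep U, V non-negative.
        U *= (X @ V + mu * (Wu @ U)) / (U @ (V.T @ V) + mu * (Du @ U) + eps)
        V *= (X.T @ U + lam * (Wv @ V)) / (V @ (U.T @ U) + lam * (Dv @ V) + eps)
    return U, V

# Toy usage on random non-negative data.
X = np.abs(np.random.default_rng(1).random((30, 40)))
U, V = dual_graph_nmf(X, rank=5)
err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

The multiplicative form mirrors classic graph-regularized NMF: each affinity matrix W appears in the numerator and its degree matrix D in the denominator, so the graph terms pull factor rows of nearby points (or correlated features) toward each other without breaking non-negativity.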