Personal Information
Professor
Doctoral Supervisor
Master's Supervisor
Main Position: teaching
Gender: Male
Alma Mater: Chongqing University
Degree: Ph.D.
Affiliation: School of Software, International School of Information Science & Engineering
Disciplines: Software Engineering; Computer Software and Theory
Office: Room 405, Comprehensive Building, Development Zone campus
Contact: Email: zkchen@dlut.edu.cn  Mobile: 13478461921  WeChat: 13478461921  QQ: 1062258606
Email: zkchen@dlut.edu.cn
Supervised Intra- and Inter-Modality Similarity Preserving Hashing for Cross-Modal Retrieval
Paper Type: Journal Article
Date of Publication: 2018-01-01
Journal: IEEE Access
Indexed by: SCIE
Volume: 6
Page Range: 27796-27808
ISSN: 2169-3536
Keywords: Cross-modal retrieval; matrix factorization; similarity preserving hashing; alternating optimization
Abstract: Cross-modal hashing has drawn considerable interest in multimodal retrieval due to the explosive growth of multimedia big data. However, existing methods mainly focus on learning unified hash codes and investigate the local geometric structure in the original space, resulting in hash codes with low discriminative power for out-of-sample instances. To address this problem, this paper investigates hashing-function learning that preserves modality correlations in the expected low-dimensional common space. A cross-modal hashing method based on supervised collective matrix factorization is proposed that takes intra-modality and inter-modality similarity preservation into account. For more flexible hashing functions, label information is embedded into the hashing-function learning procedure. Specifically, intra-modality similarity preservation is explored in the expected low-dimensional common space, and a supervised shrinking scheme is used to enhance the local geometric consistency within each modality. The proposed method learns unified hash codes as well as hashing functions for different modalities; the overall objective function, consisting of collective matrix factorization and intra- and inter-modality similarity embedding, is solved by alternating optimization in an iterative scheme. Extensive experiments on three benchmark data sets demonstrate that the proposed method generalizes better to newly arriving data and outperforms state-of-the-art supervised cross-modal hashing approaches in most cases.
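To make the collective-matrix-factorization core of the abstract concrete, the sketch below shows how unified hash codes and per-modality linear hash projections can be learned with alternating closed-form updates. It is a minimal NumPy illustration under assumed notation (X1, X2 are centered feature matrices of two modalities; lam and mu are hypothetical trade-off and regularization weights), and it deliberately omits the paper's supervised label embedding and intra-/inter-modality similarity terms, so it should not be read as the authors' exact algorithm.

```python
import numpy as np

def cmf_hash(X1, X2, n_bits=32, lam=0.5, mu=1e-2, n_iter=30, seed=0):
    """Minimal collective matrix factorization hashing sketch (illustration only).

    X1: (n, d1) zero-centered features of modality 1 (e.g. image).
    X2: (n, d2) zero-centered features of modality 2 (e.g. text).
    Returns unified hash codes B (n, n_bits) and linear projections W1, W2.
    NOTE: the supervised label embedding and similarity-preserving terms of
    the paper are omitted here; only the alternating updates are illustrated.
    """
    rng = np.random.default_rng(seed)
    n, d1 = X1.shape
    _, d2 = X2.shape
    k = n_bits
    V = rng.standard_normal((n, k))        # shared latent representation
    U1 = rng.standard_normal((d1, k))      # modality-1 basis
    U2 = rng.standard_normal((d2, k))      # modality-2 basis
    W1 = rng.standard_normal((d1, k))      # modality-1 hash projection
    W2 = rng.standard_normal((d2, k))      # modality-2 hash projection
    I_k = np.eye(k)

    for _ in range(n_iter):
        # Bases: U_m = X_m^T V (V^T V + mu I)^-1  (least-squares fit to V)
        G = np.linalg.inv(V.T @ V + mu * I_k)
        U1 = X1.T @ V @ G
        U2 = X2.T @ V @ G
        # Hash projections: ridge regression of V onto each modality's features
        W1 = np.linalg.solve(X1.T @ X1 + mu * np.eye(d1), X1.T @ V)
        W2 = np.linalg.solve(X2.T @ X2 + mu * np.eye(d2), X2.T @ V)
        # Shared representation: closed form from the quadratic objective
        rhs = X1 @ U1 + X2 @ U2 + lam * (X1 @ W1 + X2 @ W2)
        V = rhs @ np.linalg.inv(U1.T @ U1 + U2.T @ U2 + 2 * lam * I_k)

    B = np.sign(V - V.mean(axis=0))        # binarize to unified hash codes
    B[B == 0] = 1
    return B, W1, W2
```

In this sketch, codes for an out-of-sample query from either modality would be obtained as np.sign(Xq @ W1) or np.sign(Xq @ W2), which is the sense in which learning explicit hashing functions makes the method applicable to newly arriving data.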