
Zhaolong Ning (宁兆龙)

    • Associate Professor     Supervisor of Master's Students
    • Main Position: None
    • Gender: Male
    • Alma Mater: Northeastern University
    • Degree: Doctorate
    • Employment Status: Active
    • Department: School of Software
    • Disciplines: Software Engineering; Communication and Information Systems
    • E-mail: zhaolongning@dlut.edu.cn


    Combinative hypergraph learning in subspace for cross-modal ranking


    Paper Type: Journal Article

    First Author: Zhong, Fangming

    Corresponding Author: Zhong, FM (reprint author), Dalian Univ Technol, Sch Software Technol, Dalian, Peoples R China.

    Co-authors: Chen, Zhikui; Min, Geyong; Ning, Zhaolong; Zhong, Hua; Hu, Yueming

    Publication Date: 2018-10-01

    Journal: MULTIMEDIA TOOLS AND APPLICATIONS

    Indexed In: SCIE

    Volume: 77

    Issue: 19

    Pages: 25959-25982

    ISSN: 1380-7501

    Keywords: Cross-modal ranking; Subspace learning; Hypergraph; Similarity preserving

    Abstract: Recent years have witnessed a surge of interest in cross-modal ranking. To bridge the gap between heterogeneous modalities, many projection-based methods have been studied that learn a common subspace in which the correlation between different modalities can be measured directly. However, these methods generally consider only pairwise relationships and ignore high-order relationships. In this paper, combinative hypergraph learning in subspace for cross-modal ranking (CHLS) is proposed to improve cross-modal ranking by capturing such high-order relationships. We formulate cross-modal ranking as a hypergraph learning problem in a latent subspace, where high-order relationships among ranking instances can be captured. Furthermore, we propose a combinative hypergraph, built from fused similarity information, that encodes both the intra-similarity within each modality and the inter-similarity across modalities into the compact subspace representation, further improving ranking performance. Experiments on three representative cross-modal datasets show the effectiveness of the proposed method. Moreover, the rankings produced by CHLS recall 80% of the relevant cross-modal instances at a much earlier stage than state-of-the-art methods on both cross-modal ranking tasks, i.e., image-query-text and text-query-image.
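
    To make the abstract's pipeline concrete, here is a minimal sketch in Python/NumPy of the general technique it describes: fuse intra-modal and inter-modal similarities over paired instances that have already been projected into a common subspace, build a k-nearest-neighbour hypergraph from the fused similarity, and form the standard normalised hypergraph Laplacian (Zhou et al., 2007) that hypergraph learning methods typically use as a smoothness regulariser. All function names, the fusion weight alpha, and the one-hyperedge-per-instance kNN construction are illustrative assumptions, not the paper's actual CHLS formulation.

        import numpy as np

        def cosine_similarity(A, B):
            # Pairwise cosine similarity between the rows of A and the rows of B.
            A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
            B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)
            return A @ B.T

        def fused_similarity(X, Y, alpha=0.5):
            # Fuse intra-modal similarity (within X, within Y) with the
            # symmetrised inter-modal similarity between paired instances.
            # X, Y: (n, d) features of the two modalities in a shared subspace.
            S_intra = 0.5 * (cosine_similarity(X, X) + cosine_similarity(Y, Y))
            S_xy = cosine_similarity(X, Y)
            S_inter = 0.5 * (S_xy + S_xy.T)
            return alpha * S_intra + (1.0 - alpha) * S_inter

        def knn_hypergraph_incidence(S, k=5):
            # One hyperedge per instance: the instance plus its k most similar
            # neighbours under the fused similarity S, giving an (n, n) incidence
            # matrix H with H[v, e] = 1 iff vertex v belongs to hyperedge e.
            n = S.shape[0]
            H = np.zeros((n, n))
            for i in range(n):
                neighbours = np.argsort(-S[i])[:k + 1]  # includes i itself
                H[neighbours, i] = 1.0
            return H

        def hypergraph_laplacian(H, w=None):
            # Normalised hypergraph Laplacian of Zhou et al. (2007):
            #   L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
            n, m = H.shape
            w = np.ones(m) if w is None else w
            dv = H @ w                  # vertex degrees
            de = H.sum(axis=0)          # hyperedge degrees
            Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv + 1e-12))
            Theta = Dv_inv_sqrt @ H @ np.diag(w / (de + 1e-12)) @ H.T @ Dv_inv_sqrt
            return np.eye(n) - Theta

        # Usage with random stand-ins for projected image/text features:
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 32))  # e.g. image features in the subspace
        Y = rng.normal(size=(100, 32))  # e.g. paired text features
        S = fused_similarity(X, Y, alpha=0.5)
        H = knn_hypergraph_incidence(S, k=5)
        L = hypergraph_laplacian(H)     # x^T L x penalises rankings that vary
                                        # sharply within any hyperedge

    The fused similarity lets a single hyperedge group instances that are close in either modality or across modalities, which is what allows a hypergraph, unlike a pairwise graph, to capture the high-order relationships the abstract refers to.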