
Multimedia feature mapping and correlation learning for cross-modal retrieval


Indexed by: Journal Papers

Date of Publication: 2018-01-01

Journal: International Journal of Grid and High Performance Computing

Volume: 10

Issue: 3

Page Number: 29-45

ISSN No.: 1938-0259

Abstract: With the rapid increase of multimedia content on the Internet, the need for effective cross-modal retrieval has recently attracted much attention. Many related works focus only on the semantic mapping of modalities in linear space and use low-level hand-crafted features as modality representations, ignoring both the latent semantic correlations among modalities in non-linear space and the extraction of high-level modality features. To address these issues, the authors first utilize convolutional neural networks and a topic model to obtain high-level semantic features for each modality. They then propose a supervised learning algorithm based on kernel partial least squares that captures semantic correlations across modalities. Finally, a joint model of the different modalities is learned from the training set. Extensive experiments are conducted on three benchmark datasets: Wikipedia, Pascal and MIRFlickr. The results show that the proposed approach achieves better retrieval performance than several state-of-the-art approaches. Copyright © 2018, IGI Global.
