
Semantic-Guided Hashing for Cross-Modal Retrieval


Indexed by: Conference paper

Date of Publication: 2019-01-01

Included Journals: EI

Page Number: 182-190

Key Words: cross-modal hashing; label semantics; zero-shot hashing; discriminative binary codes

Abstract: In the Big Data era, information retrieval across heterogeneous or multimodal data is a significant problem. Cross-modal hashing has recently attracted increasing attention for multimodal retrieval thanks to its fast retrieval speed and low storage cost. Many supervised cross-modal hashing approaches exploit label information to achieve better performance. However, most existing methods use 0/1 binary labels or pairwise relationships as supervision, neglecting the valuable semantic correlations among different classes. To address this problem, we propose a novel two-step supervised cross-modal hashing approach, termed Semantic Guided Hashing (SeGH), to obtain discriminative binary codes. In Step 1, our method adopts an encoder-decoder paradigm based on label semantics, obtained from the word vectors of class names, to learn a discriminative projection from the original feature space to a common semantic space. In Step 2, the semantic representations of different modalities in the common space are projected into a Hamming space while preserving intra-modality and inter-modality similarity. Extensive experiments against several state-of-the-art baselines on two datasets highlight the superiority of the proposed SeGH for cross-modal retrieval, and also demonstrate its effectiveness for zero-shot cross-modal retrieval.
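The two-step pipeline described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's actual objective: the ridge-regression encoder stands in for the encoder-decoder step, the random class vectors stand in for real word embeddings of class names, and simple sign thresholding after a shared projection stands in for the similarity-preserving binarization of Step 2. All variable names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes are assumptions): 6 paired image/text samples,
# 3 classes, and a 50-d "word vector" per class name standing in for
# real embeddings such as word2vec.
n, d_img, d_txt, d_sem, n_bits = 6, 20, 10, 50, 8
X_img = rng.normal(size=(n, d_img))
X_txt = rng.normal(size=(n, d_txt))
labels = np.array([0, 0, 1, 1, 2, 2])
class_vecs = rng.normal(size=(3, d_sem))   # label semantics per class
S = class_vecs[labels]                     # per-sample semantic targets

def fit_projection(X, S, lam=1e-2):
    """Step 1 (encoder side only): ridge regression from the original
    feature space to the common semantic space,
    min_W ||X W - S||^2 + lam ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

W_img = fit_projection(X_img, S)
W_txt = fit_projection(X_txt, S)

# Step 2 (simplified): a shared projection from the semantic space to
# n_bits dimensions, then sign thresholding into Hamming space.
R = rng.normal(size=(d_sem, n_bits))

def to_codes(X, W):
    return np.sign(X @ W @ R).astype(int)

B_img = to_codes(X_img, W_img)
B_txt = to_codes(X_txt, W_txt)

def hamming(a, B):
    """Hamming distance from one code to every row of B."""
    return np.sum(a != B, axis=1)

# Cross-modal query: rank all text codes by Hamming distance
# to the first image's binary code.
dists = hamming(B_img[0], B_txt)
ranking = np.argsort(dists)
print(ranking)
```

Because the codes are derived from class-name word vectors rather than the labels themselves, the same projections could in principle embed samples of unseen classes, which is the intuition behind the zero-shot retrieval setting mentioned in the abstract.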
