An Exploration of Cross-Modal Retrieval for Unseen Concepts

Indexed by: Conference Paper

Date of Publication: 2019-01-01

Included Journals: CPCI-S, EI

Volume: 11447

Page Number: 20-35

Keywords: Cross-modal retrieval; Unseen classes; Zero-shot learning

Abstract: Cross-modal hashing has drawn increasing research interest in cross-modal retrieval due to the explosive growth of multimedia big data. However, most existing models are trained and tested in a closed-set setting and may easily fail on newly emerging concepts that never appear during training. In this paper, we propose a novel cross-modal hashing model, named Cross-Modal Attribute Hashing (CMAH), which can handle cross-modal retrieval of unseen categories. Inspired by zero-shot learning, an attribute space is employed to transfer knowledge from seen categories to unseen categories. Specifically, the learning of cross-modal hashing functions and the knowledge transfer are carried out by modeling the relationships among features, attributes, and classes as a dual multi-layer network. In addition, graph regularization and binary constraints are imposed to preserve the local structure within each modality and to reduce quantization loss, respectively. Extensive experiments on three datasets demonstrate the effectiveness of CMAH in handling cross-modal retrieval for both seen and unseen concepts.
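The following is a minimal, illustrative sketch of the ingredients named in the abstract: projecting each modality into a shared attribute space with graph regularization, then binarizing with a rotation that reduces quantization loss. It is not the paper's CMAH implementation; CMAH models features, attributes, and classes with a dual multi-layer network, whereas this sketch substitutes one linear projection per modality and an ITQ-style rotation step, and all data, names, and dimensions are hypothetical toy choices.

import numpy as np

rng = np.random.default_rng(0)
n, dx, dy, da, k = 200, 64, 32, 16, 12   # samples, feature dims, attribute dim, code bits

X = rng.standard_normal((n, dx))          # image-modality features (toy data)
Y = rng.standard_normal((n, dy))          # text-modality features (toy data)
labels = rng.integers(0, 5, size=n)       # 5 "seen" classes
A = rng.standard_normal((5, da))          # class-attribute matrix
S = A[labels]                             # per-sample attribute targets

def knn_laplacian(F, k_nn=5):
    """Graph Laplacian L = D - W of a k-NN affinity graph within one modality."""
    d = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    W = np.zeros((len(F), len(F)))
    for i in range(len(F)):
        for j in np.argsort(d[i])[1:k_nn + 1]:
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

def attribute_projection(F, S, lam_g=0.1, eps=1e-3):
    """Graph-regularized ridge regression from features to the attribute space:
    argmin_W ||F W - S||^2 + lam_g * tr(W^T F^T L F W) + eps * ||W||^2."""
    L = knn_laplacian(F)
    G = F.T @ F + lam_g * (F.T @ L @ F) + eps * np.eye(F.shape[1])
    return np.linalg.solve(G, F.T @ S)

Wx = attribute_projection(X, S)           # image -> attribute projection
Wy = attribute_projection(Y, S)           # text  -> attribute projection

# Reduce quantization loss with an ITQ-style rotation of the shared
# attribute embeddings before taking signs to obtain binary codes.
Z = np.vstack([X @ Wx, Y @ Wy])
mu = Z.mean(0)
_, _, Vt = np.linalg.svd(Z - mu, full_matrices=False)
P = Vt[:k].T                              # top-k PCA directions, shape (da, k)
R = np.linalg.qr(rng.standard_normal((k, k)))[0]
for _ in range(50):
    B = np.sign((Z - mu) @ P @ R)         # binary-code step
    U, _, Wt = np.linalg.svd(((Z - mu) @ P).T @ B)
    R = U @ Wt                            # orthogonal Procrustes rotation step

def encode(F, W):
    """Map raw features of either modality to k-bit hash codes."""
    return np.sign((F @ W - mu) @ P @ R)

Bx, By = encode(X, Wx), encode(Y, Wy)
ham = (Bx[:, None, :] != By[None, :, :]).sum(-1)   # cross-modal Hamming distances
same = labels[:, None] == labels[None, :]
print("mean Hamming, same class:", ham[same].mean())
print("mean Hamming, diff class:", ham[~same].mean())

On this toy data the same-class cross-modal Hamming distance should come out lower than the different-class distance. Note that encoding a query from an unseen class needs only its feature vector: knowledge transfer happens entirely through the class-attribute matrix, which is the zero-shot mechanism the abstract describes.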
