
A Survey on Deep Learning for Multimodal Data Fusion


Indexed by: Journal Papers

Date of Publication: 2020-03-18

Journal: Neural Computation

Included Journals: PubMed

Page Number: 1-36

ISSN No.: 1530-888X

Abstract: With the wide deployment of heterogeneous networks, huge amounts of data characterized by high volume, high variety, high velocity, and high veracity are generated. These data, referred to as multimodal big data, contain abundant intermodality and cross-modality information and pose great challenges to traditional data fusion methods. In this review, we present pioneering deep learning models for fusing such multimodal big data. Because the exploration of multimodal big data is still growing, a number of challenges remain to be addressed. This review therefore surveys deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals of multimodal deep learning fusion methods and to motivate new deep learning techniques for multimodal data fusion. Specifically, representative deep learning architectures in wide use are summarized as the foundation for understanding multimodal deep learning. The current pioneering deep learning models for multimodal data fusion are then reviewed. Finally, some challenges and future topics for deep learning models in multimodal data fusion are described.
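
For readers unfamiliar with how deep learning models fuse several modalities, the sketch below illustrates one common pattern, feature-level (intermediate) fusion, in PyTorch. It is not taken from the survey; the two-modality setup, the layer sizes, and the fusion-by-concatenation strategy are assumptions chosen only to make the idea concrete.

# Minimal, illustrative sketch of feature-level (intermediate) multimodal fusion.
# The modalities, dimensions, and concatenation-based fusion are assumptions,
# not the specific models discussed in the survey.
import torch
import torch.nn as nn

class SimpleFusionNet(nn.Module):
    def __init__(self, image_dim=2048, text_dim=300, hidden_dim=256, num_classes=10):
        super().__init__()
        # Modality-specific encoders map each input to a same-size embedding.
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Fusion head: concatenate the two embeddings and classify jointly.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, image_feat, text_feat):
        z_img = self.image_encoder(image_feat)
        z_txt = self.text_encoder(text_feat)
        fused = torch.cat([z_img, z_txt], dim=-1)  # feature-level fusion
        return self.classifier(fused)

# Usage: a batch of 4 samples with precomputed image and text features.
model = SimpleFusionNet()
logits = model(torch.randn(4, 2048), torch.randn(4, 300))
print(logits.shape)  # torch.Size([4, 10])

Early fusion (concatenating raw inputs) and late fusion (combining per-modality decisions) follow the same skeleton; only the point at which the modalities are merged changes.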
