Indexed by: Journal Paper
Date of Publication:2017-08-01
Journal:ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS
Included Journals: Scopus, SCIE, EI, ESI Highly Cited Paper
Volume:13
Issue:3
ISSN No.:1551-6857
Key Words:Deep learning; Tucker decomposition; deep computation; mobile multimedia; back-propagation
Abstract: Recently, the deep computation model, a tensor-based deep learning model, has achieved superior performance for multimedia feature learning. However, the conventional deep computation model involves a large number of parameters. Typically, training a deep computation model with millions of parameters requires high-performance servers with large-scale memory and powerful computing units, which limits the growth of the model size for multimedia feature learning on common devices such as portable CPUs and conventional desktops. To tackle this problem, this article proposes a Tucker deep computation model that uses the Tucker decomposition to compress the weight tensors in the fully connected layers for multimedia feature learning. Furthermore, a learning algorithm based on the back-propagation strategy is devised to train the parameters of the Tucker deep computation model. Finally, the performance of the Tucker deep computation model is evaluated against the conventional deep computation model on two representative multimedia datasets, CUAVE and SNAE2, in terms of accuracy drop, parameter reduction, and speedup. The results imply that the Tucker deep computation model achieves a large parameter reduction and speedup with only a small accuracy drop for multimedia feature learning.
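To make the compression idea concrete, the following is a minimal NumPy sketch (not the paper's implementation) of compressing a weight tensor via truncated higher-order SVD, the standard way to compute a Tucker decomposition. The tensor shape (32, 32, 32) and ranks (8, 8, 8) are illustrative assumptions; the paper's actual layer sizes, ranks, and back-propagation-based training procedure are not reproduced here.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Multiply tensor T by matrix M along axis `mode` (n-mode product).
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def tucker_hosvd(W, ranks):
    # Truncated HOSVD: factor matrices are the leading left singular
    # vectors of each mode-n unfolding; the core is W projected onto them.
    factors = [np.linalg.svd(unfold(W, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    core = W
    for n, U in enumerate(factors):
        core = mode_dot(core, U.T, n)
    return core, factors

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32, 32))      # hypothetical weight tensor
core, factors = tucker_hosvd(W, (8, 8, 8))

# Parameter counts: full tensor vs. core plus factor matrices.
full = W.size                               # 32**3 = 32768
compressed = core.size + sum(U.size for U in factors)  # 8**3 + 3*32*8 = 1280
print(full, compressed)

# The forward pass can reconstruct W on the fly: W ≈ core ×1 U1 ×2 U2 ×3 U3.
W_hat = core
for n, U in enumerate(factors):
    W_hat = mode_dot(W_hat, U, n)
```

With these ranks the layer stores 1,280 numbers instead of 32,768, a roughly 25x reduction, which is the kind of parameter saving the abstract reports for the fully connected layers.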