
An attention mechanism based convolutional LSTM network for video action recognition


Indexed by: Journal Papers

Date of Publication: 2019-07-01

Journal: MULTIMEDIA TOOLS AND APPLICATIONS

Included Journals: SCIE, EI

Volume: 78

Issue: 14

Page Number: 20533-20556

ISSN No.: 1380-7501

Key Words: Attention mechanism; Convolutional LSTM; Spatial transformer; Video action recognition

Abstract: As an important problem in video classification, human action recognition has become a hot topic in computer vision. Effectively representing both the static spatial and dynamic temporal information of videos is a key challenge in action recognition. This paper proposes an attention-based convolutional LSTM action recognition algorithm that improves recognition accuracy by effectively extracting the salient action regions in videos. First, GoogLeNet is used to extract features from video frames. Then, those feature maps are passed through a spatial transformer network to attend to salient regions. Finally, the temporal dynamics of the feature sequence are modeled with a convolutional LSTM to classify the action in the original video. To accelerate training, we apply temporal coherence analysis to reduce the redundant features extracted by GoogLeNet, with negligible accuracy loss. Compared with state-of-the-art video action recognition algorithms, competitive results are achieved on three widely used datasets: UCF-11, HMDB-51 and UCF-101. Moreover, with temporal coherence analysis, strong results are still obtained while the training time is reduced.
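The pipeline described in the abstract (per-frame CNN features, spatial attention, then a convolutional LSTM over the frame sequence) hinges on the ConvLSTM cell, which replaces the matrix multiplications of a standard LSTM with convolutions so the gates retain spatial structure. A minimal NumPy sketch of such a cell follows; the channel counts, kernel size, and random initialization are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def conv2d(x, w):
    # Naive 'same'-padded 2D convolution.
    # x: (C_in, H, W), w: (C_out, C_in, k, k) -> (C_out, H, W)
    C_out, C_in, k, _ = w.shape
    p = k // 2
    H, W = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((C_out, H, W))
    for co in range(C_out):
        for i in range(H):
            for j in range(W):
                out[co, i, j] = np.sum(xp[:, i:i+k, j:j+k] * w[co])
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """LSTM cell whose gates are computed by convolution, so the
    hidden and cell states keep the (channels, H, W) spatial layout."""

    def __init__(self, in_ch, hid_ch, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # One convolution produces all four gates from [x; h] stacked on channels.
        self.w = rng.standard_normal((4 * hid_ch, in_ch + hid_ch, k, k)) * 0.1
        self.hid_ch = hid_ch

    def step(self, x, h, c):
        gates = conv2d(np.concatenate([x, h], axis=0), self.w)
        i, f, o, g = np.split(gates, 4, axis=0)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_new = f * c + i * g            # gated cell-state update
        h_new = o * np.tanh(c_new)       # output gate on the new cell state
        return h_new, c_new

# Toy usage: run a short sequence of "attended feature maps" through the cell.
cell = ConvLSTMCell(in_ch=2, hid_ch=4)
h = np.zeros((4, 5, 5))
c = np.zeros((4, 5, 5))
seq = np.random.default_rng(1).standard_normal((3, 2, 5, 5))  # 3 frames
for x in seq:
    h, c = cell.step(x, h, c)
print(h.shape)  # the hidden state keeps its spatial layout: (4, 5, 5)
```

In the paper's full model the final hidden state would be pooled and fed to a classifier; here the sketch stops at the recurrent update to show how spatial structure is preserved through time.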
