Deep visual tracking: Review and experimental comparison

Indexed by: Journal Article

Date of Publication:2018-04-01

Journal:PATTERN RECOGNITION

Included Journals: SCIE, EI

Volume:76

Page Number:323-338

ISSN No.:0031-3203

Key Words:Visual tracking; Deep learning; CNN; RNN; Pre-training; Online learning

Abstract: Recently, deep learning has achieved great success in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on deep learning. First, we introduce the background of deep visual tracking, including the fundamental concepts of visual tracking and related deep learning algorithms. Second, we categorize the existing deep-learning-based trackers into three classes according to network structure, network function, and network training. For each category, we explain its network perspective and analyze the corresponding papers. Then, we conduct extensive experiments to compare the representative methods on the popular OTB-100, TC-128, and VOT2015 benchmarks. Based on our observations, we conclude that: (1) The usage of the convolutional neural network (CNN) model could significantly improve the tracking performance. (2) Trackers that use the CNN model to distinguish the tracked object from its surrounding background tend to yield more accurate results, while those using the CNN model for template matching are usually faster. (3) Trackers with deep features perform much better than those with low-level hand-crafted features. (4) Deep features from different convolutional layers have different characteristics, and their effective combination usually results in a more robust tracker. (5) Deep visual trackers using end-to-end networks usually perform better than trackers merely using feature extraction networks. (6) For visual tracking, the most suitable network training method is to pre-train networks with video information and fine-tune them online with subsequent observations. Finally, we summarize the manuscript, highlight our insights, and point out future trends for deep visual tracking. (C) 2017 Elsevier Ltd. All rights reserved.
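For illustration only, and not code from the paper: the following minimal Python/PyTorch sketch shows what conclusion (4) refers to in practice, namely collecting feature maps from several convolutional depths of an ImageNet-pretrained backbone (here VGG-16, a common choice in the surveyed trackers) so that a tracker can combine shallow, spatially detailed responses with deep, semantic ones. The layer indices, function name, and placeholder frame are assumptions made for this sketch.

    # Hypothetical sketch: multi-layer deep feature extraction from a
    # pre-trained CNN backbone (assumed setup, not the paper's code).
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # ImageNet-pretrained VGG-16 used purely as a feature extractor.
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

    # Indices of the ReLU outputs after conv3_3, conv4_3, and conv5_3.
    LAYER_IDS = {15: "relu3_3", 22: "relu4_3", 29: "relu5_3"}

    def extract_hierarchical_features(image):
        """Collect feature maps from several depths: shallow layers keep
        spatial detail, deep layers carry more semantic information."""
        preprocess = T.Compose([
            T.Resize((224, 224)),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225]),
        ])
        x = preprocess(image).unsqueeze(0)  # add batch dimension
        feats = {}
        with torch.no_grad():
            for idx, layer in enumerate(vgg):
                x = layer(x)
                if idx in LAYER_IDS:
                    feats[LAYER_IDS[idx]] = x
        return feats

    if __name__ == "__main__":
        frame = Image.new("RGB", (640, 360))  # placeholder video frame
        for name, fmap in extract_hierarchical_features(frame).items():
            print(name, tuple(fmap.shape))

A tracker built on such features might, for example, correlate each map with a target template and weight the deep (semantic) and shallow (spatial) responses differently, which is the kind of combination the review finds to be more robust.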
