Personal Information
Professor
Doctoral Supervisor
Master's Supervisor
Main Position: Executive Dean, School of Future Technology / School of Artificial Intelligence
Gender: Male
Alma Mater: Dalian University of Technology
Degree: Doctorate
Affiliation: School of Information and Communication Engineering
Discipline: Signal and Information Processing
Office: Room 218, School of Future Technology / School of Artificial Intelligence, Dalian University of Technology
Contact: ****
E-mail: lhchuan@dlut.edu.cn
Deep visual tracking: Review and experimental comparison
Paper Type: Journal Article
Date Published: 2018-04-01
Journal: PATTERN RECOGNITION
Indexed In: SCIE, EI
Volume: 76
Pages: 323-338
ISSN: 0031-3203
Keywords: Visual tracking; Deep learning; CNN; RNN; Pre-training; Online learning
Abstract: Recently, deep learning has achieved great success in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on deep learning. First, we introduce the background of deep visual tracking, including the fundamental concepts of visual tracking and related deep learning algorithms. Second, we categorize the existing deep-learning-based trackers into three classes according to network structure, network function, and network training. For each category, we explain its rationale from the network perspective and analyze the papers that fall within it. Then, we conduct extensive experiments to compare representative methods on the popular OTB-100, TC-128, and VOT2015 benchmarks. Based on our observations, we conclude that: (1) the use of a convolutional neural network (CNN) model can significantly improve tracking performance; (2) trackers that use a CNN model to distinguish the tracked object from its surrounding background tend to produce more accurate results, while those that use a CNN model for template matching are usually faster; (3) trackers with deep features perform much better than those with low-level hand-crafted features; (4) deep features from different convolutional layers have different characteristics, and an effective combination of them usually yields a more robust tracker; (5) deep visual trackers using end-to-end networks usually perform better than trackers that merely use feature extraction networks; (6) for visual tracking, the most suitable training strategy is to pre-train networks with video information and fine-tune them online with subsequent observations. Finally, we summarize the manuscript, highlight our insights, and point out future trends for deep visual tracking.
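Observations (2) and (4) of the abstract describe two recurring design patterns: using a CNN for fast template matching, and fusing deep features from several convolutional layers for robustness. The following is a minimal sketch combining both ideas, not the method of any tracker reviewed in the paper; it assumes PyTorch with torchvision's pre-trained VGG-16 (torchvision >= 0.13 for the `weights=` API), and the layer indices, crop sizes, and fusion scheme are illustrative choices.

```python
# A minimal sketch, NOT the paper's method: Siamese-style template matching
# with deep features fused from two convolutional layers.
import torch
import torch.nn.functional as F
from torchvision import models

# Pre-trained backbone used purely as a feature extractor (observation 3).
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

# Post-ReLU outputs of conv3_3 (index 15) and conv5_3 (index 29): the
# shallow layer keeps spatial detail, the deep layer carries semantics;
# combining the two tends to be more robust (observation 4).
LAYERS = {15: "relu3_3", 29: "relu5_3"}

@torch.no_grad()
def extract_features(x):
    """Run the backbone and collect feature maps at the chosen layers."""
    feats = {}
    for i, layer in enumerate(backbone):
        x = layer(x)
        if i in LAYERS:
            feats[LAYERS[i]] = x
    return feats

@torch.no_grad()
def match(template, search):
    """Cross-correlate template features against the search region and sum
    the normalized response maps from both layers (observation 2: matching
    with a fixed template is fast but less discriminative than online
    foreground/background classification)."""
    t_feats = extract_features(template)  # e.g. a 1x3x127x127 exemplar crop
    s_feats = extract_features(search)    # e.g. a 1x3x255x255 search crop
    response = None
    for name in LAYERS.values():
        # Use the template feature map as a correlation kernel.
        r = F.conv2d(s_feats[name], t_feats[name])
        r = F.interpolate(r, size=(17, 17), mode="bilinear",
                          align_corners=False)
        r = (r - r.mean()) / (r.std() + 1e-6)  # normalize before fusion
        response = r if response is None else response + r
    return response  # the peak indicates the predicted target location

# Usage: random tensors stand in for real image crops.
exemplar = torch.randn(1, 3, 127, 127)
search_region = torch.randn(1, 3, 255, 255)
print(match(exemplar, search_region).shape)  # torch.Size([1, 1, 17, 17])
```

In a full tracker along the lines the survey describes, the backbone would be pre-trained on video data and fine-tuned online as new observations arrive (observation 6); here the weights stay fixed to keep the sketch self-contained.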