STCT: Sequentially Training Convolutional Networks for Visual Tracking

Indexed by: Conference Paper

Date of Publication: 2016-06-26

Included Journals: EI, CPCI-S, SCIE

Volume: 2016-December

Page Number: 1373-1381

Abstract: Due to the limited number of training samples, fine-tuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features to online applications. We regard a CNN as an ensemble, with each channel of the output feature map serving as an individual base learner. Each base learner is trained with a different loss criterion to reduce correlation and avoid over-training. To achieve the best ensemble online, the base learners are sequentially sampled into the ensemble via importance sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serve as a regularization that forces each base learner to focus on different input features.
   The proposed online training method is applied to the visual tracking problem by transferring deep features trained on massive annotated visual data, and it is shown to significantly improve tracking performance. Extensive experiments conducted on two challenging benchmark datasets demonstrate that our tracking algorithm outperforms state-of-the-art methods by a considerable margin.
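To make the masking idea in the abstract concrete, below is a minimal sketch, assuming a PyTorch-style convolutional layer. The class name MaskedConv, the keep_prob parameter, and the exact masking scheme are illustrative assumptions, not taken from the paper's implementation; the sketch only shows how a fixed random binary mask can restrict each output channel (base learner) to a different subset of input features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedConv(nn.Module):
    """Toy illustration of training convolutional layers with random
    binary masks: each output channel is treated as one base learner,
    and a fixed binary mask over the input channels forces each
    learner to attend to a different subset of input features.
    This is an assumed interpretation, not the paper's code."""

    def __init__(self, in_channels, num_learners,
                 kernel_size=3, keep_prob=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_learners,
                              kernel_size, padding=kernel_size // 2)
        # Fixed (non-trainable) binary mask: entry (k, c) == 1 means
        # base learner k is allowed to see input channel c.
        mask = (torch.rand(num_learners, in_channels, 1, 1)
                < keep_prob).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Mask the filter weights so each output channel (base
        # learner) only uses its assigned input channels.
        masked_weight = self.conv.weight * self.mask
        return F.conv2d(x, masked_weight, self.conv.bias,
                        padding=self.conv.padding)


# Usage: each of the 16 output channels acts as one base learner.
features = torch.randn(1, 64, 32, 32)   # e.g. pre-trained deep features
layer = MaskedConv(in_channels=64, num_learners=16)
base_learner_maps = layer(features)      # shape: (1, 16, 32, 32)
```

Registering the mask as a buffer (rather than a parameter) keeps it fixed throughout training, which matches the abstract's description of the masks acting as a regularizer that decorrelates the base learners.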
