Video Person Re-Identification by Temporal Residual Learning

Indexed by:Journal Article

Date of Publication:2019-03-01

Journal:IEEE TRANSACTIONS ON IMAGE PROCESSING

Included Journals:SCIE, Scopus

Volume:28

Issue:3

Page Number:1366-1377

ISSN No.:1057-7149

Key Words:Person re-identification; spatial-temporal transformation; temporal residual learning

Abstract:In this paper, we propose a novel feature learning framework for video person re-identification (re-ID). The framework aims to exploit the rich temporal information in video sequences and to address the poor spatial alignment of moving pedestrians. Specifically, to exploit the temporal information, we design a temporal residual learning (TRL) module that simultaneously extracts the generic and specific features of consecutive frames. The TRL module is equipped with two bi-directional LSTMs (BiLSTMs), each responsible for describing a moving person from a different aspect, providing complementary information for better feature representations. To deal with the poor spatial alignment in video re-ID data sets, we propose a spatial-temporal transformer network (ST²N) module. Transformation parameters in the ST²N module are learned by leveraging the high-level semantic information of the current frame as well as temporal context from other frames. With fewer learnable parameters, the proposed ST²N module enables effective person alignment under significant appearance changes. Extensive experiments on the large-scale MARS, PRID2011, iLIDS-VID, and SDU-VID data sets demonstrate that the proposed method achieves consistently superior performance and outperforms most recent state-of-the-art methods.
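For illustration only, the following is a minimal PyTorch sketch of the two-BiLSTM temporal modeling idea described in the abstract: one BiLSTM branch summarizes the generic appearance shared across frames, while a second branch models frame-specific (residual) variation, and the two streams are fused into a clip-level descriptor. This is not the authors' released code; the class name, feature dimensions, and the exact residual formulation (subtracting the temporal mean) are assumptions made for the sketch.

import torch
import torch.nn as nn

class TRLSketch(nn.Module):
    # Hypothetical sketch of the temporal residual learning (TRL) idea:
    # two BiLSTMs over per-frame CNN features, one capturing generic
    # appearance, one capturing frame-specific (residual) variation.
    def __init__(self, feat_dim=2048, hidden_dim=256):
        super().__init__()
        self.generic_lstm = nn.LSTM(feat_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)
        self.residual_lstm = nn.LSTM(feat_dim, hidden_dim,
                                     batch_first=True, bidirectional=True)

    def forward(self, x):
        # x: (batch, num_frames, feat_dim) per-frame features
        generic, _ = self.generic_lstm(x)
        # Residual input: deviation of each frame from the clip's mean
        # feature (an assumed formulation, chosen for the sketch).
        residual, _ = self.residual_lstm(x - x.mean(dim=1, keepdim=True))
        # Concatenate the complementary streams and pool over time.
        fused = torch.cat([generic, residual], dim=-1)
        return fused.mean(dim=1)  # clip-level descriptor

Example usage: for a batch of 4 clips with 8 frames of 2048-dim features each, TRLSketch()(torch.randn(4, 8, 2048)) yields a (4, 1024) descriptor (two bidirectional streams of 2 x 256 dims each, concatenated).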
