Huchuan Lu (卢湖川)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Main Position: Executive Dean, School of Future Technology / School of Artificial Intelligence

Gender: Male

Alma Mater: Dalian University of Technology

Degree: Doctorate

Affiliation: School of Information and Communication Engineering

Discipline: Signal and Information Processing

Office: Room 218, School of Future Technology / School of Artificial Intelligence, Dalian University of Technology

Contact: ****

Email: lhchuan@dlut.edu.cn

Publications

Video Person Re-Identification by Temporal Residual Learning

Paper Type: Journal article

Publication Date: 2019-03-01

Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING

Indexed In: SCIE, Scopus

Volume: 28

Issue: 3

Pages: 1366-1377

ISSN: 1057-7149

Keywords: Person re-identification; spatial-temporal transformation; temporal residual learning

Abstract: In this paper, we propose a novel feature learning framework for video person re-identification (re-ID). The proposed framework aims to fully exploit the temporal information of video sequences and to tackle the poor spatial alignment of moving pedestrians. More specifically, to exploit the temporal information, we design a temporal residual learning (TRL) module that simultaneously extracts the generic and specific features of consecutive frames. The TRL module is equipped with two bi-directional LSTMs (BiLSTMs), which are respectively responsible for describing a moving person in different aspects, providing complementary information for better feature representations. To deal with the poor spatial alignment in video re-ID data sets, we propose a spatial-temporal transformer network (ST²N) module. Transformation parameters in the ST²N module are learned by leveraging the high-level semantic information of the current frame as well as the temporal context knowledge from other frames. The proposed ST²N module, with fewer learnable parameters, allows effective person alignment under significant appearance changes. Extensive experimental results on the large-scale MARS, PRID2011, iLIDS-VID, and SDU-VID data sets demonstrate that the proposed method achieves consistently superior performance and outperforms most of the very recent state-of-the-art methods.
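As a rough illustration of the residual idea behind the TRL module, the toy sketch below splits a sequence of per-frame features into a sequence-level "generic" feature and per-frame "specific" residuals. This is a minimal NumPy stand-in, not the paper's BiLSTM-based implementation; the function name and the use of a simple temporal mean are assumptions made for illustration only.

```python
import numpy as np

def temporal_residual_decompose(frames: np.ndarray):
    """Toy decomposition of per-frame features (T x D) into a
    sequence-level generic feature (D,) and frame-specific
    residuals (T x D). The mean over time stands in for the
    generic branch; in the paper this role is played by BiLSTMs."""
    generic = frames.mean(axis=0)   # shared appearance across the sequence
    residuals = frames - generic    # per-frame deviations from the generic part
    return generic, residuals

# tiny demo: 4 frames with 3-D features
rng = np.random.default_rng(0)
frames = rng.normal(size=(4, 3))
generic, residuals = temporal_residual_decompose(frames)

# the two parts exactly reconstruct the input,
# and the residuals average to zero over time
assert np.allclose(generic + residuals, frames)
assert np.allclose(residuals.mean(axis=0), 0.0)
```

The point of the decomposition is that the two branches carry complementary information: the generic part summarizes what is stable across frames, while the residuals retain what is frame-specific.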