王洪玉 (Wang Hongyu)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Male

Alma Mater: Tianjin University

Degree: Doctorate

Affiliation: School of Information and Communication Engineering

Disciplines: Communication and Information Systems; Signal and Information Processing

Office: Room B510, Innovation Park Building, Dalian University of Technology

Contact: Email: whyu@dlut.edu.cn  Office Phone: 0411-84707675  Mobile Phone: 13842827170

Publications

Person re-identification by multiple instance metric learning with impostor rejection

Paper Type: Journal Article

Date of Publication: 2017-07-01

Journal: PATTERN RECOGNITION

Indexed by: SCIE, EI, Scopus

Volume: 67

Page Range: 287-298

ISSN: 0031-3203

Keywords: Person re-identification; Graphical model; Multiple instance metric learning; Impostor rejection

Abstract: Due to its ability to eliminate the visual ambiguities in single-shot algorithms, video-based person re-identification has received increasing attention in computer vision. Visual ambiguities caused by variations in view angle, lighting, and occlusion make the re-identification problem extremely challenging. To overcome these ambiguities, most previous approaches either extract robust feature representations or learn a sophisticated feature transformation. However, most of these approaches ignore the effect of impostors arising from the annotation or tracking process. In that case, impostors are regarded as genuine samples and used in the training process, leading to the model drift problem. To reduce the risk of model drift, we propose to automatically discover impostors in a multiple instance metric learning framework. Specifically, we propose a kNN-based confidence score to evaluate how much an impostor intrudes on the target of interest and use it as a prior in the framework. Meanwhile, we integrate an impostor rejection mechanism into the multiple instance metric learning framework to automatically discover impostors and learn semantic similarity metrics on the refined training set. Experiments show that the proposed system performs favorably against state-of-the-art algorithms on two challenging datasets (iLIDS-VID and PRID 2011). We improve the rank-1 recognition rate on the iLIDS-VID and PRID 2011 datasets by 1.0% and 1.2%, respectively. (C) 2017 Elsevier Ltd. All rights reserved.
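The abstract does not give the exact form of the kNN-based confidence score. The sketch below is a minimal, hypothetical Python illustration, assuming the score is simply the fraction of a candidate frame's k nearest neighbours that belong to the target identity; the function name knn_confidence and the 128-D toy descriptors are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def knn_confidence(candidate, target_feats, other_feats, k=5):
    """Hypothetical kNN-based confidence score (not the paper's exact formula):
    the fraction of the candidate's k nearest neighbours that belong to the
    target identity. Values near 1.0 suggest a genuine frame; values near 0.0
    suggest an impostor that could be rejected before metric learning."""
    # Pool target and non-target features and label them 1 / 0.
    pool = np.vstack([target_feats, other_feats])
    labels = np.concatenate([np.ones(len(target_feats)), np.zeros(len(other_feats))])
    # Euclidean distance from the candidate frame to every pooled feature.
    dists = np.linalg.norm(pool - candidate, axis=1)
    # Indices of the k nearest neighbours.
    nearest = np.argsort(dists)[:k]
    # Confidence = share of nearest neighbours drawn from the target's samples.
    return labels[nearest].mean()

# Toy usage with random 128-D descriptors (dimensionality is an assumption).
rng = np.random.default_rng(0)
target_feats = rng.normal(0.0, 1.0, size=(40, 128))   # frames of the target track
other_feats = rng.normal(3.0, 1.0, size=(200, 128))   # frames of other identities
genuine_frame = rng.normal(0.0, 1.0, size=128)
impostor_frame = rng.normal(3.0, 1.0, size=128)
print(knn_confidence(genuine_frame, target_feats, other_feats))   # high, near 1.0
print(knn_confidence(impostor_frame, target_feats, other_feats))  # low, near 0.0
```

In this reading, frames with low confidence would be treated as likely impostors and down-weighted or removed before the similarity metric is learned on the refined training set.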