Huchuan Lu

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Primary Position: Executive Dean, School of Future Technology / School of Artificial Intelligence

Gender: Male

Alma Mater: Dalian University of Technology

Degree: Ph.D.

Affiliation: School of Information and Communication Engineering

Discipline: Signal and Information Processing

Office: Room 218, School of Future Technology / School of Artificial Intelligence, Dalian University of Technology

Contact: ****

Email: lhchuan@dlut.edu.cn

Publications

Hyperfusion-Net: Hyper-densely reflective feature fusion for salient object detection

Paper Type: Journal Article

Publication Date: 2019-09-01

Journal: PATTERN RECOGNITION

Indexed In: SCIE, EI

Volume: 93

Page Range: 521-533

ISSN: 0031-3203

Keywords: Salient object detection; Image reflection separation; Multiple feature fusion; Convolutional Neural Network

Abstract: Salient Object Detection (SOD), which aims to find the most important region of interest and segment the relevant objects/items in that region, is an important yet challenging task in computer vision and image processing. This vision problem is inspired by the fact that humans perceive the main scene elements with high priority. Thus, accurate detection of salient objects in complex scenes is critical for human-computer interaction. In this paper, we present a novel reflective feature learning framework, which achieves high detection accuracy while maintaining a compact model design. The proposed framework uses a hyper-densely reflective feature fusion network (named HyperFusion-Net) to automatically predict the most important area and segment the associated objects in an end-to-end manner. Specifically, inspired by the human perception system and image reflection separation, we first decompose the input images into reflective image pairs by content-preserving transforms. Then, the complementary information of the reflective image pairs is jointly extracted by an Interweaved Convolutional Neural Network (ICNN) and hierarchically combined with a hyper-dense fusion mechanism. Based on the fused multi-scale features, our method finally predicts salient objects by casting SOD as a pixel-wise classification problem. Extensive experiments on seven public datasets demonstrate that the proposed method consistently outperforms other state-of-the-art methods by a large margin. (C) 2019 Elsevier Ltd. All rights reserved.
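
As a rough illustration of the pipeline the abstract describes, the PyTorch sketch below forms a reflective image pair with a content-preserving transform, encodes both images with a shared multi-scale CNN, densely fuses all scales from both streams, and predicts a per-pixel saliency map. All class and variable names here are hypothetical, and the horizontal flip is only a stand-in for whichever content-preserving transforms the paper actually uses; this is a minimal sketch under those assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    # A small multi-scale CNN; a stand-in for the paper's Interweaved CNN (ICNN).
    def __init__(self, in_ch=3, widths=(64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(ch, w, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2)))
            ch = w

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # one feature map per scale
        return feats

class HyperFusionSketch(nn.Module):
    def __init__(self, widths=(64, 128, 256)):
        super().__init__()
        self.encoder = SharedEncoder(widths=widths)
        # Hyper-dense fusion concatenates every scale of both streams,
        # so the pixel-wise classifier sees 2 * sum(widths) channels.
        self.head = nn.Conv2d(2 * sum(widths), 1, kernel_size=1)

    def forward(self, img):
        # "Content-preserving transform": a horizontal flip is used here
        # purely as a placeholder to form the reflective image pair.
        pair = [img, torch.flip(img, dims=[-1])]
        fused = []
        for i, x in enumerate(pair):
            for f in self.encoder(x):
                f = F.interpolate(f, size=img.shape[-2:],
                                  mode='bilinear', align_corners=False)
                if i == 1:
                    f = torch.flip(f, dims=[-1])  # undo the flip so features align
                fused.append(f)
        # Cast SOD as pixel-wise classification: one saliency score per pixel.
        return torch.sigmoid(self.head(torch.cat(fused, dim=1)))

model = HyperFusionSketch()
saliency = model(torch.randn(1, 3, 224, 224))  # -> shape (1, 1, 224, 224)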