
Boundary-Guided Feature Aggregation Network for Salient Object Detection


Indexed by:Journal paper

Date of Publication:2018-12-01

Journal:IEEE SIGNAL PROCESSING LETTERS

Included Journals:SCIE, Scopus

Volume:25

Issue:12

Page Number:1800-1804

ISSN No.:1070-9908

Key Words:Attention; boundary information extraction; feature fusion; salient object detection

Abstract:Fully convolutional networks (FCNs) have significantly improved the performance of many pixel-labeling tasks, such as semantic segmentation and depth estimation. However, it remains nontrivial to thoroughly exploit multilevel convolutional feature maps and boundary information for salient object detection. In this letter, we propose a novel FCN framework that recurrently integrates multilevel convolutional features under the guidance of object boundary information. First, a deep convolutional network extracts multilevel feature maps and separately aggregates them at multiple resolutions, which are used to generate coarse saliency maps. Meanwhile, a separate boundary information extraction branch is proposed to generate boundary features. Finally, an attention-based feature fusion module is designed to fuse boundary information into salient regions, achieving accurate boundary inference and semantic enhancement. The final saliency maps, obtained by combining the predicted boundary maps with the integrated saliency maps, are closer to the ground truths. Experiments and analysis on four large-scale benchmarks verify that our framework achieves new state-of-the-art results.
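The attention-based fusion described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation; it only shows the common pattern of turning boundary responses into a sigmoid attention map that re-weights saliency features, with all shapes and the residual combination being illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    # elementwise logistic function, maps responses into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def attention_fusion(saliency_feat, boundary_feat):
    """Hypothetical attention-based fusion: a sigmoid attention map
    derived from boundary features re-weights the saliency features,
    and the weighted result is added back as a residual."""
    attn = sigmoid(boundary_feat)              # attention weights in (0, 1)
    return saliency_feat + attn * saliency_feat

# toy single-channel 4x4 feature maps
sal = np.ones((4, 4))          # uniform coarse saliency response
bnd = np.zeros((4, 4))
bnd[0, :] = 5.0                # strong boundary response on the top row

fused = attention_fusion(sal, bnd)
# boundary pixels are enhanced relative to interior pixels
```

Under this sketch, features at strong boundary locations are amplified roughly twofold while the rest change only mildly, mimicking how boundary guidance can sharpen saliency near object contours.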
