
Dense and Sparse Reconstruction Error Based Saliency Descriptor


Indexed by:Journal Article

Date of Publication:2016-04-01

Journal:IEEE TRANSACTIONS ON IMAGE PROCESSING

Included Journals:SCIE, EI, ESI Highly Cited Paper

Volume:25

Issue:4

Page Number:1592-1603

ISSN No.:1057-7149

Key Words:Saliency detection; dense/sparse reconstruction error; sparse representation; context-based propagation; region compactness; Bayesian integration

Abstract:In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors on the background templates for each image region. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, the pixel-level reconstruction error is computed by integrating the multi-scale reconstruction errors. Both the pixel-level dense and sparse reconstruction errors are then weighted by image compactness, which yields more accurate saliency detection. In addition, we introduce a novel Bayesian integration method for combining saliency maps, and apply it to integrate the two saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against 24 state-of-the-art methods in terms of precision, recall, and F-measure on three standard public salient object detection databases.
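
The two reconstruction-error measures at the core of the method can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' code: it assumes each superpixel is summarized by a feature vector (e.g. mean color and position), that the boundary superpixels' features are stacked as rows of a background template matrix B, and that a PCA subspace of B and an L1-regularized (Lasso) coding over B stand in for the dense and sparse appearance models; all names and parameters are illustrative.

# Minimal sketch of dense and sparse reconstruction errors for one region
# (illustrative only; not the published implementation).
import numpy as np
from sklearn.linear_model import Lasso

def dense_error(x, B, n_components=4):
    """Error of reconstructing region feature x from a PCA basis of the background templates B."""
    mean = B.mean(axis=0)
    _, _, Vt = np.linalg.svd(B - mean, full_matrices=False)
    U = Vt[:n_components].T                 # (d, k) principal directions of the background
    x_hat = U @ (U.T @ (x - mean)) + mean   # project x onto the background subspace
    return float(np.sum((x - x_hat) ** 2))

def sparse_error(x, B, lam=0.01):
    """Error of reconstructing x as a sparse combination of background templates."""
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(B.T, x)                       # dictionary atoms = template feature vectors
    x_hat = B.T @ lasso.coef_
    return float(np.sum((x - x_hat) ** 2))

# Toy usage: a region close to the background model yields small errors,
# while a region far from it yields large errors, i.e. is more likely salient.
rng = np.random.default_rng(0)
B = rng.normal(size=(40, 8))                # 40 boundary superpixels, 8-D features
background_like = B.mean(axis=0)
salient_like = background_like + 3.0
print(dense_error(background_like, B), dense_error(salient_like, B))
print(sparse_error(background_like, B), sparse_error(salient_like, B))

In the paper, these per-region errors are then propagated via K-means contexts, fused over multiple superpixel scales, compactness-weighted, and finally combined through Bayesian integration, as summarized in the abstract above.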
