
Unsupervised detail-preserving network for high quality monocular depth estimation

Indexed by:Journal Papers

Date of Publication:2020-09-03

Journal:NEUROCOMPUTING

Included Journals:SCIE

Volume:404

Page Number:1-13

ISSN No.:0925-2312

Key Words:Unsupervised network; Monocular; Depth estimation; Rectangle convolution; Learned composite proximal operator

Abstract:In this paper, we propose an unsupervised learning framework to address the inaccurate inference of depth details and the loss of spatial information in monocular depth estimation. First, as an unsupervised technique, the proposed framework is trained on easily collected stereo image pairs instead of ground-truth depth data. Second, we design a rectangle convolution that captures global dependencies between neighboring pixels across entire rows or columns of an image, which significantly improves the inference of depth details. Third, we propose a learned depth refinement module, consisting of a color-guided refinement layer and a learned composite proximal operator, to preserve depth discontinuities and produce high-quality depth maps. The proposed network is fully differentiable and end-to-end trainable. Extensive experiments on the KITTI, Cityscapes, and Make3D datasets demonstrate state-of-the-art performance and good cross-dataset generalization ability. (C) 2020 Elsevier B.V. All rights reserved.
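
Note: As an illustration of the rectangle-convolution idea mentioned in the abstract, the PyTorch-style sketch below aggregates context with strip-shaped kernels that span an entire row or column of a feature map. The kernel sizes, channel counts, residual fusion, and the class name RectangleConv are illustrative assumptions for this listing, not the authors' published implementation.

    # Minimal sketch of a "rectangle convolution" block (assumed design, PyTorch).
    import torch
    import torch.nn as nn

    class RectangleConv(nn.Module):
        """Gathers context along entire rows and columns with strip-shaped kernels."""
        def __init__(self, channels, height, width):
            super().__init__()
            # Horizontal strip: one kernel row spanning the full feature-map width.
            self.row_conv = nn.Conv2d(channels, channels,
                                      kernel_size=(1, width),
                                      padding=(0, width // 2))
            # Vertical strip: one kernel column spanning the full feature-map height.
            self.col_conv = nn.Conv2d(channels, channels,
                                      kernel_size=(height, 1),
                                      padding=(height // 2, 0))
            # 1x1 convolution to mix the two directional responses.
            self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, x):
            # Crop back to the input size in case padding adds an extra row/column.
            h, w = x.shape[-2:]
            row = self.row_conv(x)[..., :h, :w]
            col = self.col_conv(x)[..., :h, :w]
            # Sum row- and column-wise context, mix channels, keep a residual path.
            return self.fuse(row + col) + x

    if __name__ == "__main__":
        feat = torch.randn(1, 32, 48, 160)              # e.g. a downsampled feature map
        block = RectangleConv(channels=32, height=48, width=160)
        print(block(feat).shape)                        # torch.Size([1, 32, 48, 160])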
