Personal homepage: http://faculty.dlut.edu.cn/yexinchen/zh_CN/index.htm
Unsupervised Detail-Preserving Network for High Quality Monocular Depth Estimation
Xinchen Ye1*, Mingliang Zhang1, Xin Fan1
1 Dalian University of Technology
* Corresponding author
Introduction
Monocular depth estimation is a challenging task with many important applications, including scene understanding and reconstruction, autonomous navigation, and augmented reality. In recent years, deep learning has achieved great success in predicting the depth map from a single-view color image. Early works mainly focused on supervised learning; however, ground-truth annotations are usually sparse and difficult to capture with depth-sensing equipment. To address this issue, recent
unsupervised methods formulate depth estimation as an image reconstruction problem, where view synthesis provides an effective supervisory signal to train the network. We also adopt this unsupervised technique in the proposed framework. Specifically, we propose an unsupervised detail-preserving framework for monocular depth estimation that addresses two problems: inaccurate inference of depth details and loss of spatial information.
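The view-synthesis idea can be illustrated with a minimal numpy sketch: given a predicted depth map, known camera intrinsics K, and a relative pose (R, t) between two views, the source image is warped into the target view and compared photometrically. This is only an illustrative sketch, not the paper's implementation; all names here are hypothetical, real pipelines predict depth and pose with networks and use differentiable bilinear sampling rather than the nearest-neighbour sampling used below.

```python
import numpy as np

def view_synthesis_loss(target, source, depth, K, R, t):
    """Photometric reconstruction loss used as an unsupervised
    supervisory signal: back-project target pixels to 3-D with the
    predicted depth, transform them into the source camera, re-project,
    sample the source image, and compare with the target image."""
    H, W = depth.shape
    K_inv = np.linalg.inv(K)
    # Pixel grid in homogeneous coordinates, shape (3, H*W)
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    # Back-project with predicted depth, apply relative pose, re-project
    cam = (K_inv @ pix) * depth.ravel()
    proj = K @ (R @ cam + t[:, None])
    u = proj[0] / proj[2]
    v = proj[1] / proj[2]
    # Nearest-neighbour sampling with a validity mask for out-of-view pixels
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    valid = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    warped = np.zeros(H * W)
    warped[valid] = source[vi[valid], ui[valid]]
    # Mean L1 photometric error over valid pixels
    return np.abs(warped - target.ravel())[valid].mean()
```

With an identity pose (R = I, t = 0) the warp is the identity, so synthesizing a view from itself yields zero loss; during training, minimizing this loss over real image pairs pushes the depth (and pose) predictions toward geometrically consistent values.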
Index Terms— Unsupervised network, Monocular, Depth estimation, Detail-preserving
Method
Publications
[1] Mingliang Zhang, Xinchen Ye*, Xin Fan, Wei Zhong, Unsupervised Depth Estimation from Monocular Videos with Hybrid Geometric-refined Loss and Contextual Attention, Neurocomputing, 379: 250-261, 2020.
[2] Mingliang Zhang, Xinchen Ye*, Xin Fan, Unsupervised Detail-Preserving Network for High Quality Monocular Depth Estimation, Neurocomputing, 404: 1-13, 2020.
[3] Xinchen Ye*, Mingliang Zhang, Xin Fan, Rui Xu, Juncheng Pu, Ruoke Yan, Cascaded Detail-Aware Network for Unsupervised Monocular Depth Estimation, ICME 2020, London, UK. (CCF-B)
[4] Xinchen Ye*, Mingliang Zhang, Rui Xu, Wei Zhong, Xin Fan, Unsupervised Monocular Depth Estimation Based on Dual Attention Mechanism and Depth-Aware Loss, ICME 2019, Shanghai, China. (CCF-B)