Xin Fan (樊鑫)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Main Positions: Dean and Deputy Party Committee Secretary, School of Software and the Dalian University of Technology-Ritsumeikan University International School of Information and Software Engineering

Gender: Male

Alma Mater: Xi'an Jiaotong University

Degree: Ph.D.

Affiliation: School of Software; International School of Information and Software Engineering

Disciplines: Software Engineering; Computational Mathematics

Email: xin.fan@dlut.edu.cn

Publications

Learning Aggregated Transmission Propagation Networks for Haze Removal and Beyond

Paper Type: Journal Article

Publication Date: 2019-10-01

Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

Indexed In: SCIE

Volume: 30

Issue: 10

Pages: 2973-2986

ISSN: 2162-237X

Keywords: Haze and rain removal; residual networks (ResNets); transmission propagation; underwater image enhancement

Abstract: Single-image dehazing is an important low-level vision task with many applications. Early studies investigated various visual priors to address this problem, but such priors may fail when their assumptions do not hold on specific images. Recent deep networks also achieve relatively good performance on this task; unfortunately, because they disregard the rich physical rules governing haze, large amounts of data are required for their training. More importantly, they may still fail when test images exhibit haze distributions completely different from the training data. By combining these two perspectives, this paper designs a novel residual architecture that aggregates both prior (i.e., domain knowledge) and data (i.e., haze distribution) information to propagate transmissions for scene radiance estimation. We further present a variational energy-based perspective to investigate the intrinsic propagation behavior of our aggregated deep model. In this way, we bridge the gap between prior-driven models and data-driven networks, leveraging the advantages while avoiding the limitations of previous dehazing approaches. A lightweight learning framework is proposed to train our propagation network. Finally, by introducing a task-aware image separation formulation with a flexible optimization scheme, we extend the proposed model to more challenging vision tasks, such as underwater image enhancement and single-image rain removal. Experiments on both synthetic and real-world images demonstrate the effectiveness and efficiency of the proposed framework.
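For context on how an estimated transmission map yields the dehazed result, transmission-based methods such as this one ultimately invert the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where I is the hazy image, J the scene radiance, t the transmission, and A the atmospheric light. The sketch below shows only that final, well-known inversion step; the function name, the t_min floor, and the random placeholder inputs are illustrative assumptions, standing in for the transmission and atmospheric light that the paper's propagation network would actually estimate.

```python
import numpy as np

def recover_scene_radiance(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy:              H x W x 3 hazy image, floats in [0, 1]
    transmission:      H x W transmission map (assumed given here; in the
                       paper it would come from the propagation network)
    atmospheric_light: length-3 global atmospheric light estimate
    t_min:             lower bound on t, preventing division by near-zero
                       values from amplifying noise in dense-haze regions
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over RGB
    radiance = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(radiance, 0.0, 1.0)

# Toy usage with random placeholders (not real data or model outputs):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I = rng.random((4, 4, 3))          # stand-in hazy image
    t = rng.uniform(0.3, 1.0, (4, 4))  # stand-in transmission map
    A = np.array([0.9, 0.9, 0.9])      # stand-in atmospheric light
    J = recover_scene_radiance(I, t, A)
    print(J.shape)  # (4, 4, 3)
```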