
  • 雷振坤 (Professor)

    Personal homepage: http://faculty.dlut.edu.cn/Leizk/zh_CN/index.htm

  • Professor, Doctoral Supervisor, Master's Supervisor
Publications
Texture preservation and speckle reduction in poor optical coherence tomography using the convolutional neural network

Paper type: Journal article
Date of publication: 2021-02-25
Journal: MEDICAL IMAGE ANALYSIS
Volume: 64
ISSN: 1361-8415
Keywords: Optical coherence tomography; Speckle; Image processing; Convolutional neural network
Abstract: For a poor-quality optical coherence tomography (OCT) image, quality enhancement is limited by residual speckle, edge blur, and texture loss, especially in background regions near edges. To address this problem, we propose a despeckling method based on a convolutional neural network (CNN). The method uses a deep nonlinear CNN mapping model with a serial architecture, named OCTNet, which fully exploits the deep information on speckle, edges, and fine textures in the original OCT image. We also construct a pertinent training dataset by combining three existing methods. With the proposed method, the speckle noise can be accurately estimated from an original OCT image. We test the method on four experimental human retinal OCT images and compare it with three state-of-the-art methods: adaptive complex diffusion (ACD), curvelet shrinkage (Curvelet), and shearlet-based total variation (STV). The performance of these methods is quantitatively evaluated in terms of image distinguishability, contrast, smoothness, and edge sharpness, and qualitatively analyzed with respect to speckle reduction, texture protection, and edge preservation. The experimental results show that OCTNet effectively and simultaneously reduces speckle noise, protects structural information, and preserves edge features, even in background regions near edges. OCTNet also offers excellent generalization, adaptiveness, robustness, and batch performance, making it suitable for rapidly processing large numbers of different images without parameter fine-tuning under time-constrained, real-time conditions. (C) 2020 Elsevier B.V. All rights reserved.
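The abstract describes a serial CNN that estimates the speckle component so it can be subtracted from the noisy OCT image. The following is a minimal, hypothetical PyTorch sketch of such a residual-style despeckling network; the depth, channel width, and use of batch normalization are illustrative assumptions, not the paper's actual OCTNet configuration.

```python
# Hypothetical sketch of a serial despeckling CNN (not the authors' exact OCTNet).
# Assumption: the network predicts the speckle component, which is subtracted
# from the noisy OCT input to obtain the despeckled image.
import torch
import torch.nn as nn

class SerialDespeckleCNN(nn.Module):
    def __init__(self, depth=8, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]  # output: estimated speckle map
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        speckle = self.body(noisy)   # network output is the noise estimate
        return noisy - speckle       # residual subtraction gives the despeckled image

# Usage on a single-channel OCT B-scan (batch, channel, height, width)
model = SerialDespeckleCNN()
x = torch.rand(1, 1, 256, 256)       # placeholder OCT image with intensities in [0, 1]
with torch.no_grad():
    clean = model(x)
```

Predicting the noise rather than the clean image (residual learning) is a common design choice for denoising CNNs, since the residual is typically easier to learn; whether OCTNet follows this exact scheme is an assumption based on the abstract's statement that the method extracts the speckle noise from the original image.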

 
