王洪凯 (Wang Hongkai)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Primary Position: Deputy Dean, Faculty of Medicine

Gender: Male

Alma Mater: Tsinghua University

Degree: Doctorate

Affiliation: Faculty of Medicine

Discipline: Biomedical Engineering

Email: wang.hongkai@dlut.edu.cn

Publications

Inter-Subject Shape Correspondence Computation From Medical Images Without Organ Segmentation

Paper Type: Journal Article

Publication Date: 2019-01-01

Journal: IEEE ACCESS

Indexed In: SCIE

Volume: 7

Pages: 130772-130781

ISSN: 2169-3536

Keywords: Statistical shape model; shape modeling; shape correspondences; anatomical landmark; combined-intensity-and-landmark registration

Abstract: Statistical shape models (SSMs) have been established as robust anatomical priors for medical image segmentation, registration and anatomy modelling. To construct an SSM that accurately models inter-subject anatomical variation, it is crucial to compute accurate shape correspondences between the training samples. To achieve this goal, state-of-the-art shape correspondence computation methods typically require tedious segmentation of the training images, while paying little attention to the correspondence accuracy of key anatomical landmarks such as bone joints and vessel bifurcations. As a result, the computation of shape correspondence is time-consuming and the correspondence accuracy is imperfect. To solve these problems, this paper proposes a novel shape correspondence computation approach that eliminates the need for image segmentation by registering an organ shape template to the training images. The method allows a human expert to specify key anatomical landmarks in the training images to define the correspondence of these crucial landmarks. A combined-intensity-and-landmark registration strategy is implemented to exploit both the image intensity and the expert landmarks to obtain accurate shape correspondence. The method is evaluated for the construction of a head anatomy SSM and a spine SSM from computed tomography (CT) images. The SSMs constructed using the proposed method demonstrate better shape correspondence accuracy than other state-of-the-art correspondence methods. In particular, the method obtains pixel-level surface correspondence accuracy (1.38 mm) for the skull and sub-pixel-level accuracy (0.92 mm) for the spine. The generalisability and specificity of the SSMs constructed using our method are also superior to those of SSMs constructed using the compared correspondence methods. Overall, the proposed approach requires less human intervention and produces higher-quality SSMs with better shape modelling accuracy.
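
The abstract describes the pipeline only at a high level. As a generic illustration, and not the paper's actual implementation, the sketch below shows the standard PCA step that turns a set of training shapes with established point correspondences into an SSM (a mean shape plus principal modes of variation). The function names build_ssm and synthesize and the toy data are hypothetical.

    # Minimal sketch of PCA-based SSM construction, assuming point
    # correspondences between training shapes are already available
    # (each shape is an (n_vertices, 3) array, vertex i corresponds
    # to the same anatomical location in every subject).
    import numpy as np

    def build_ssm(shapes, n_modes=5):
        """Return (mean_shape, modes, variances) from corresponding shapes.

        shapes: array of shape (n_subjects, n_vertices, 3).
        """
        n_subjects, n_vertices, _ = shapes.shape
        X = shapes.reshape(n_subjects, -1)        # flatten each shape to a row vector
        mean = X.mean(axis=0)                     # mean shape
        Xc = X - mean                             # center the data
        # PCA via SVD of the centered data matrix
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        variances = (S ** 2) / (n_subjects - 1)   # variance explained by each mode
        modes = Vt[:n_modes]                      # principal shape variation modes
        return mean.reshape(n_vertices, 3), modes, variances[:n_modes]

    def synthesize(mean_shape, modes, coeffs):
        """Generate a new shape instance from mode coefficients."""
        flat = mean_shape.reshape(-1) + coeffs @ modes
        return flat.reshape(mean_shape.shape)

    if __name__ == "__main__":
        # Toy data: 20 random "subjects", 100 corresponding vertices each.
        rng = np.random.default_rng(0)
        template = rng.normal(size=(100, 3))
        shapes = template + 0.05 * rng.normal(size=(20, 100, 3))
        mean_shape, modes, variances = build_ssm(shapes, n_modes=3)
        new_shape = synthesize(mean_shape, modes,
                               np.array([1.0, -0.5, 0.2]) * np.sqrt(variances))
        print(mean_shape.shape, modes.shape, new_shape.shape)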