
    赵哲焕 (Zhao Zhehuan)

    • Associate Professor       Supervisor of Master's Students
    • Gender: Male
    • Alma Mater: Dalian University of Technology
    • Degree: Doctorate
    • Affiliation: School of Software, International School of Information and Software
    • Discipline: Software Engineering
    • Office: Room 317, Comprehensive Building, Development Zone Campus, Dalian University of Technology
    • Email: z.zhao@dlut.edu.cn

    Deep transfer learning for modality classification of medical images

    Paper Type: Journal Article

    Publication Date: 2017-07-29

    Journal: Information (Switzerland)

    Indexed by: Scopus, EI

    Volume: 8

    Issue: 3

    ISSN: 2078-2489

    Abstract: Medical images are valuable for clinical diagnosis and decision making. Modality classification is an important first step, as it helps clinicians access the required medical images in retrieval systems. Traditional modality classification methods depend on the choice of hand-crafted features and demand clear prior domain knowledge. Feature learning approaches can efficiently detect the visual characteristics of different modalities, but they are limited by the size of the training datasets. To overcome the shortage of labeled data, on the one hand, we take deep convolutional neural networks (VGGNet, ResNet) of different depths pre-trained on ImageNet, fix most of the earlier layers to preserve generic features of natural images, and train only their higher-level portion on ImageCLEF to learn domain-specific features of medical figures. Then, we train from scratch a deep CNN with only six weight layers to capture more domain-specific features. On the other hand, we employ two data augmentation methods to help the CNNs realize their full potential in characterizing image modality features. The final prediction is given by our voting system based on the outputs of the three CNNs. After evaluating our proposed model on the subfigure classification task in ImageCLEF2015 and ImageCLEF2016, we obtain new state-of-the-art results (76.87% in ImageCLEF2015 and 87.37% in ImageCLEF2016), which imply that CNNs, based on our proposed transfer learning methods and data augmentation skills, can identify modalities of medical images more efficiently. © 2017 by the authors.
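
    The transfer-learning step described in the abstract (freezing the ImageNet-pretrained lower layers of VGGNet and training only the higher layers on modality labels) can be illustrated with a minimal PyTorch sketch. This is not the authors' code; the class count, layer split, and optimizer settings below are illustrative assumptions rather than values reported in the paper.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_MODALITIES = 30  # assumption: number of modality classes in the ImageCLEF subfigure task

    # Load VGG16 with weights pre-trained on ImageNet.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

    # Freeze the convolutional feature-extraction layers to preserve
    # generic features learned from natural images.
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer so the network predicts
    # modality classes; the classifier layers stay trainable and learn
    # domain-specific features of medical figures.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_MODALITIES)

    # Optimize only the parameters that still require gradients (the classifier head).
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
    )
    criterion = nn.CrossEntropyLoss()
    ```

    In the paper's full pipeline, two such fine-tuned networks (VGGNet and ResNet), a six-layer CNN trained from scratch, and data augmentation feed a voting scheme over the three networks' outputs to produce the final modality prediction.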