Personal Information
Professor
Supervisor of master's students
Gender: Male
Alma mater: Dalian University of Technology
Degree: Doctorate
Affiliation: School of Innovation and Entrepreneurship
Office: Room 402, School of Innovation and Entrepreneurship
Phone: 041184707111
Email: fenglin@dlut.edu.cn
Semantic Discriminative Metric Learning for Image Similarity Measurement
Paper type: Journal article
Date of publication: 2016-08-01
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Indexed by: SCIE, EI, Scopus
Volume: 18
Issue: 8
Pages: 1579-1589
ISSN: 1520-9210
Keywords: Divergence balance; geometric mean; image similarity measurement; metric learning; semantic discriminative metric learning (SDML); semantic information
Abstract: With the arrival of the multimedia era, multimedia data has replaced textual data as the primary carrier of information in many fields. As an important form of multimedia data, images are widely used in applications such as face recognition and image classification. Accurately annotating each image in a large collection is therefore vitally important but challenging. To perform these tasks well, it is crucial to extract suitable features that characterize the visual content of images and to learn an appropriate distance metric for measuring similarities between images. Unfortunately, existing feature descriptors, such as the histogram of oriented gradients, local binary patterns, and color histograms, capture the visual appearance of images but lack the ability to distinguish semantic information. Similarities between such features cannot reflect the true category correlations because of the well-known semantic gap. To address this problem, this paper proposes a regularized distance metric framework called semantic discriminative metric learning (SDML). SDML combines the geometric mean with normalized divergences and simultaneously separates images from different classes. The learned distance metric treats all classes equally, and SDML emphasizes the distinctions between visually similar classes with entirely different semantic content. This procedure keeps dissimilarities consistent with semantic distinctions and avoids the inaccurate similarities caused by unbalanced sample locations. Experiments on benchmark image datasets demonstrate the excellent performance of the proposed method.
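The abstract describes learning a distance metric under which images from different classes are pushed apart. As a minimal illustrative sketch (not the paper's SDML algorithm, whose divergence-balancing objective is not reproduced here), metric-learning methods of this kind typically parameterize a Mahalanobis-form distance d_M(x, y) = (x − y)ᵀ M (x − y), keeping M positive semi-definite by factoring it as M = LᵀL:

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under metric M."""
    d = x - y
    return float(d @ M @ d)

# Toy metric: factoring M = L^T L guarantees positive semi-definiteness,
# so the learned distance is always non-negative.
L = np.array([[2.0, 0.0],
              [0.0, 1.0]])
M = L.T @ L

x = np.array([1.0, 0.0])
y = np.array([0.0, 0.0])

# The metric stretches the first axis: squared distance becomes 4.0,
# whereas the plain squared Euclidean distance (M = I) would be 1.0.
print(mahalanobis_distance(x, y, M))        # 4.0
print(mahalanobis_distance(x, y, np.eye(2)))  # 1.0
```

In a learned metric, L (or M directly) would be fit from labeled data so that distances along semantically discriminative directions are amplified, which is the role the SDML objective plays in the paper.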