Document Type: Journal Article
Date of Publication:2018-03-01
Journal:Journal of Information Hiding and Multimedia Signal Processing
Included Journals:EI
Volume:9
Issue:2
Page Number:461-473
ISSN No.: 2073-4212
Abstract: Two kinds of relations exist between words: syntagmatic and paradigmatic. Word embedding, as a state-of-the-art model of distributional semantics, has been used to discover paradigmatic relations between words and has been widely applied in natural language processing tasks. Based on the hypothesis that, at the sentence level, apart from words in paradigmatic relations, two words in a certain syntagmatic relation are more similar than two words not in any syntagmatic relation, we propose to discover words in syntagmatic relations within a sentence using word-embedding-based similarity computation. The experiments confirm that word-embedding-based similarity between words in syntagmatic relations is higher than that between words not in any syntagmatic relation, and that the word-embedding-based method is competitive with the best measures in the literature and can serve as a good complement to them. This finding can benefit many syntagmatically oriented natural language processing tasks such as parsing, text generation, machine translation, collocation extraction, and multi-word expression recognition. A further experiment on collocation extraction shows that the proposed word-embedding-based association measure is effective in filtering noisy collocation candidates at the sentence level and outperforms the existing well-known association measures in precision, recall, and F-measure. © 2018, Ubiquitous International. All rights reserved.
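The abstract describes a sentence-level association measure based on embedding similarity between candidate word pairs. The following is a minimal sketch of that idea, assuming a pre-trained word2vec model loaded with gensim; the model file name, similarity threshold, and function names are illustrative assumptions, not details taken from the paper.

    # Minimal sketch: embedding-based association for filtering noisy
    # collocation candidates at sentence level (illustrative, not the
    # paper's exact procedure).
    from gensim.models import KeyedVectors

    # Assumed: a pre-trained word2vec model in binary word2vec format.
    vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

    def embedding_association(w1, w2):
        """Cosine similarity between the embeddings of two words;
        returns 0.0 if either word is out of vocabulary."""
        if w1 not in vectors or w2 not in vectors:
            return 0.0
        return float(vectors.similarity(w1, w2))

    def filter_candidates(candidate_pairs, threshold=0.3):
        """Keep candidate pairs whose embedding-based association exceeds
        an (assumed) threshold, discarding noisy candidates."""
        return [(w1, w2) for w1, w2 in candidate_pairs
                if embedding_association(w1, w2) > threshold]

    # Example: score word pairs extracted from a single sentence.
    sentence_pairs = [("strong", "tea"), ("strong", "the")]
    print(filter_candidates(sentence_pairs))

In this sketch, pairs whose embedding similarity falls below the threshold are treated as not being in any syntagmatic relation and are dropped; the threshold value would need to be tuned on held-out data.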