Paper type: Conference paper
Indexed in: CPCI-S, EI
Volume: 669
Pages: 143-150
Keywords: Natural language processing; Maximum margin; Word representation; Small background texts
Abstract: Vector representations of words learned from large-scale background texts can serve as useful features in natural language processing and machine learning applications. Previous work typically trained word representations on large-scale unlabeled texts. However, in some scenarios, large-scale background texts are not available. We therefore propose a novel maximum-margin word representation model that learns word representations from a small set of background texts. Experimental results demonstrate the advantages of our method.
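To illustrate the general idea behind max-margin training of word representations (not the paper's actual model, whose details are not given here), the following is a minimal sketch: a hinge-loss ranking objective that requires an observed (context, target) word pair to outscore a randomly corrupted pair by a fixed margin. The toy corpus, dot-product scorer, and all hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus standing in for a small set of background texts.
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

dim, margin, lr = 16, 1.0, 0.05
E = rng.normal(scale=0.1, size=(len(vocab), dim))  # one embedding row per word

def score(c, t):
    # Compatibility of context word c and target word t (assumed dot-product scorer).
    return E[c] @ E[t]

for epoch in range(100):
    for pos in range(1, len(corpus)):
        c, t = idx[corpus[pos - 1]], idx[corpus[pos]]
        n = rng.integers(len(vocab))  # randomly corrupted target word
        if n == t:
            continue
        # Hinge loss: observed pair must outscore the corrupted pair by `margin`.
        if margin - score(c, t) + score(c, n) > 0:
            g = E[c].copy()                 # keep old context vector for updates
            E[c] += lr * (E[t] - E[n])      # subgradient step on context word
            E[t] += lr * g                  # pull true target toward context
            E[n] -= lr * g                  # push corrupted target away

print("learned vector for 'fox':", np.round(E[idx["fox"]][:5], 3))
```

The hinge loss only triggers an update when the margin constraint is violated, which is the characteristic behavior of maximum-margin objectives; the negative-sampling scheme shown is one common choice, not necessarily the one used in the paper.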
