Li Lishuang (李丽双)

Personal Information

Professor

Doctoral Supervisor

Master's Supervisor

Gender: Female

Alma Mater: Dalian University of Technology

Degree: Doctorate

Affiliation: School of Computer Science and Technology

Disciplines: Computer Application Technology; Computer Software and Theory

Office: Innovation Building A930

E-mail: lils@dlut.edu.cn

Publications

Associative attention networks for temporal relation extraction from electronic health records

Paper Type: Journal Article

Publication Date: 2019-11-01

Journal: Journal of Biomedical Informatics

Indexed in: EI, PubMed

Volume: 99

Pages: 103309

ISSN: 1532-0480

Keywords: Attention mechanism, Electronic health records, Natural language processing, Position weights, Temporal relation extraction

Abstract: Temporal relations are crucial for constructing a timeline over the course of clinical care, which can help medical practitioners and researchers track the progression of diseases, treatments, and adverse reactions over time. Due to the rapid adoption of Electronic Health Records (EHRs) and the high cost of manual curation, using Natural Language Processing (NLP) to extract temporal relations automatically has become a promising approach. Typically, temporal relation extraction is formulated as a classification problem over instances of entity pairs, relying on information hidden in the context. However, EHRs contain an overwhelming number of entities, and many entity pairs share the same context, making it difficult to distinguish instances and to identify the contextual information relevant to a specific entity pair. These factors pose significant challenges for temporal relation extraction, yet existing methods rarely address them. In this work, we propose associative attention networks to tackle these issues. Each instance is first carved into three segments according to the entity pair, yielding an initial differentiated representation. We then devise an associative attention mechanism that further distinguishes instances by emphasizing the relevant information and reconstructs the associations among segments to form the final representation of the whole instance. In addition, position weights are utilized to enhance performance. We validate the merit of our method on the widely used THYME corpus and achieve an average F1-score of 64.3% over three runs, outperforming the state of the art by 1.5%.
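The abstract's idea of carving an instance into three segments around an entity pair, combined with position weights, can be sketched in plain Python. This is an illustrative outline only: the exact segmentation boundaries and the position-weight formula are assumptions, since the abstract does not specify them, and the real model learns attention over neural representations rather than hand-set weights.

```python
import math


def segment_instance(tokens, e1_span, e2_span):
    """Carve a tokenized instance into three segments around an entity pair.
    Spans are (start, end) token indices, end exclusive. The cut points here
    (after the first entity, before the second) are an illustrative choice."""
    (s1, t1), (s2, t2) = sorted([e1_span, e2_span])
    left = tokens[:t1]       # up to and including the first entity
    middle = tokens[t1:s2]   # context between the two entities
    right = tokens[s2:]      # from the second entity onward
    return left, middle, right


def position_weights(n, e1_span, e2_span):
    """Give each token a weight that decays with distance to the nearest
    entity, so nearby context contributes more. The 1/(1+d) decay is a
    hypothetical stand-in for the paper's position weights."""
    weights = []
    for i in range(n):
        d1 = min(abs(i - e1_span[0]), abs(i - (e1_span[1] - 1)))
        d2 = min(abs(i - e2_span[0]), abs(i - (e2_span[1] - 1)))
        weights.append(1.0 / (1.0 + min(d1, d2)))
    return weights


def softmax(scores):
    """Normalize attention scores into a distribution over tokens."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]


# Example: a toy clinical sentence with a drug and an admission event.
tokens = ["He", "started", "aspirin", "two", "days", "after", "admission"]
left, middle, right = segment_instance(tokens, (2, 3), (6, 7))
attn = softmax(position_weights(len(tokens), (2, 3), (6, 7)))
```

In the actual model, such weights would modulate learned token representations before the associative attention combines the three segments into one instance representation.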