A Normalized Encoder-Decoder Model for Abstractive Summarization Using Focal Loss

Indexed by:Conference paper

Date of Publication:2018-01-01

Included Journals:CPCI-S

Volume:11109

Page Number:383-392

Key Words:Summarization; Seq2Seq; Pre-trained word embedding; Normalized encoder-decoder structure; Focal loss

Abstract:Summarization based on the seq2seq model is a popular research topic today, and pre-trained word embeddings are a common unsupervised method for improving a deep learning model's performance in NLP. However, when applying this method directly to the seq2seq model, we find that it does not achieve the same good results as in other fields, owing to an over-training problem. In this paper, we propose a normalized encoder-decoder structure to address this issue, which prevents the semantic structure of the pre-trained word embeddings from being destroyed during training. Moreover, we use a novel focal loss function to help our model focus on examples with low scores, yielding better performance. We conduct experiments on NLPCC2018 Shared Task 3: single document summarization. The results show that these two mechanisms are extremely useful, helping our model achieve state-of-the-art ROUGE scores and take first place in this task on the current rankings.
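The focal loss mentioned in the abstract down-weights well-classified examples so training concentrates on low-score ones. A minimal sketch of the standard focal-loss formulation, FL(p_t) = -(1 - p_t)^γ · log(p_t), applied to predicted token distributions (the exact formulation used in the paper may differ; the function name and batch layout here are illustrative assumptions):

```python
import math

def focal_loss(probs, targets, gamma=2.0):
    """Mean focal loss over a batch of predicted token distributions.

    probs:   list of per-example probability vectors (softmax output)
    targets: list of gold token indices
    gamma:   focusing parameter; gamma = 0 recovers plain cross-entropy
    """
    total = 0.0
    for dist, t in zip(probs, targets):
        p_t = dist[t]  # probability assigned to the gold token
        # (1 - p_t)^gamma shrinks the loss for confident (high p_t)
        # predictions, so gradients focus on hard, low-score examples.
        total += -((1.0 - p_t) ** gamma) * math.log(p_t)
    return total / len(targets)
```

With gamma = 0 this reduces to cross-entropy; increasing gamma suppresses the contribution of examples the model already predicts well.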
