Please use this identifier to cite or link to this item: https://ruomo.lib.uom.gr/handle/7000/878
Title: Encoding Position Improves Recurrent Neural Text Summarizers
Authors: Karanikolos, Apostolos; Refanidis, Ioannis
Editors: Abbas, Mourad; Freihat, Abed Alhakim
Type: Conference Paper
Subjects: FRASCATI::Natural sciences::Computer and information sciences
Keywords: natural language processing; abstractive text summarization; neural sequence-to-sequence models; positional embeddings
Issue Date: Sep-2019
Publisher: The Association for Computational Linguistics
First Page: 143
Last Page: 150
Volume Title: Proceedings of the 3rd International Conference on Natural Language and Speech Processing (ICNLSP 2019)
Abstract: Modern text summarizers are large neural networks (recurrent, convolutional, or transformer-based) trained end-to-end under an encoder-decoder framework. Equipped with an attention mechanism that maintains a memory of their source hidden states, these networks are able to generalize well to long text sequences. In this paper, we explore how the different modules involved in an encoder-decoder structure affect the quality of the produced summary, as measured by ROUGE score on the widely used CNN/Daily Mail and Gigaword summarization datasets. We find that encoding the position of the text tokens before feeding them to a recurrent text summarizer yields a significant ROUGE gain on the former dataset, but not on the latter.
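This record does not include code, so the sketch below is only a minimal illustration of the idea described in the abstract: injecting token positions into the input of a recurrent summarizer's encoder. It assumes fixed sinusoidal positional encodings in the style of Vaswani et al. (2017) added to token embeddings before a BiLSTM; the class names, hyperparameters, and the choice of sinusoidal (rather than learned) positional embeddings are assumptions for illustration, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Fixed sinusoidal positional encodings (Vaswani et al., 2017)."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2, dtype=torch.float)
            * (-math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)  # (max_len, d_model), not trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); add the encoding for each position.
        return x + self.pe[: x.size(1)].unsqueeze(0)

class PositionAwareRecurrentEncoder(nn.Module):
    """Hypothetical encoder: token embeddings + positions -> BiLSTM."""
    def __init__(self, vocab_size: int, d_model: int = 256, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = PositionalEncoding(d_model)
        self.rnn = nn.LSTM(d_model, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids: torch.Tensor):
        # Inject position information before the recurrent layer.
        x = self.pos(self.embed(token_ids))
        # The BiLSTM outputs would serve as the attention memory of
        # source hidden states in an encoder-decoder summarizer.
        outputs, state = self.rnn(x)
        return outputs, state

# Usage: encode a toy batch of token ids.
enc = PositionAwareRecurrentEncoder(vocab_size=50_000)
ids = torch.randint(0, 50_000, (2, 40))  # (batch=2, seq_len=40)
memory, _ = enc(ids)                     # (2, 40, 512) attention memory
```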
URI: https://www.aclweb.org/anthology/W19-74.pdf
URI: https://ruomo.lib.uom.gr/handle/7000/878
Electronic ISBN: 978-1-950737-62-8
Appears in Collections: Department of Applied Informatics
Files in This Item:
File | Description | Size | Format
---|---|---|---
paper_8.pdf | Preprint | 889.59 kB | Adobe PDF