CLSEG: Contrastive Learning of Story Ending Generation
Author
Abstract

Story Ending Generation (SEG) is a challenging task in natural language generation. Recently, methods based on Pre-trained Language Models (PLMs) have achieved great success and can produce fluent and coherent story endings. However, the pre-training objective of PLM-based methods does not model the consistency between the story context and the ending. The goal of this paper is to adopt contrastive learning to generate endings that are more consistent with the story context, which raises two main challenges. The first is the negative sampling of wrong endings that are inconsistent with the story context. The second is the adaptation of contrastive learning to SEG. To address these two issues, we propose a novel Contrastive Learning framework for Story Ending Generation (CLSEG), which has two steps: multi-aspect sampling and story-specific contrastive learning. For the first issue, we use novel multi-aspect sampling mechanisms to obtain wrong endings that violate the consistency of order, causality, and sentiment. For the second issue, we carefully design a story-specific contrastive training strategy adapted for SEG. Experiments show that CLSEG outperforms baselines and produces story endings with stronger consistency and rationality.
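The abstract does not spell out the contrastive objective, so the following is only a minimal sketch of a generic InfoNCE-style contrastive loss of the kind such a framework might use: the model's score (e.g., log-likelihood) for the true ending is pushed above the scores of sampled wrong endings. The function name, the temperature parameter, and the example scores are all illustrative assumptions, not the paper's actual formulation.

```python
import math

def contrastive_loss(pos_score, neg_scores, temperature=1.0):
    """InfoNCE-style loss: -log softmax probability of the true ending.

    pos_score  -- model score for the consistent (gold) ending
    neg_scores -- scores for sampled wrong endings (hypothetically,
                  one per aspect: order, causality, sentiment)
    """
    scores = [pos_score] + list(neg_scores)
    exps = [math.exp(s / temperature) for s in scores]
    return -math.log(exps[0] / sum(exps))

# Illustrative scores only: the true ending scores higher than three
# hypothetical aspect-inconsistent negatives, giving a small loss.
loss = contrastive_loss(-1.0, [-3.0, -2.5, -4.0])
```

Minimizing this quantity widens the margin between the consistent ending and the multi-aspect negatives; as the positive score rises relative to the negatives, the loss shrinks toward zero.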

Year of Publication
2022
Date Published
May
Publisher
IEEE
Conference Location
Singapore, Singapore
ISBN Number
978-1-66540-540-9
URL
https://ieeexplore.ieee.org/document/9747435/
DOI
10.1109/ICASSP43922.2022.9747435