Online Coreference Resolution for Dialogue Processing: Improving Mention-Linking in Real Time

Liyan Xu, Jinho D. Choi


Abstract

This paper proposes a direction for coreference resolution in online decoding on actively generated input such as dialogue, where the model accepts an utterance together with its past context, then finds mentions in the current utterance as well as their referents, upon each dialogue turn. A baseline and four incrementally updated models adapted from the mention-linking paradigm are proposed for this new setting, addressing different aspects including singletons, speaker-grounded encoding, and cross-turn mention contextualization. Our approach is assessed on three datasets: Friends, OntoNotes, and BOLT. Results show that each aspect brings steady improvement, and our best models outperform the baseline by over 10%, presenting an effective system for this setting. Further analysis highlights the characteristics of the task, such as the significance of addressing mention recall.
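
Below is a minimal Python sketch of the per-turn decoding protocol described in the abstract: at each dialogue turn the system receives the new utterance plus the accumulated past context, detects mentions in the utterance, and links each mention to an existing cluster or starts a new (possibly singleton) cluster. All names here (CorefState, resolve_turn, etc.) are illustrative rather than the authors' API, and the trivial string-match detector and linker are stand-ins for the paper's learned mention-linking models.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Mention:
    turn: int                 # index of the dialogue turn
    span: str                 # surface form of the mention

@dataclass
class CorefState:
    """State carried across turns and updated incrementally."""
    turns: List[str] = field(default_factory=list)
    clusters: List[List[Mention]] = field(default_factory=list)

def detect_mentions(utterance: str, turn: int) -> List[Mention]:
    # Stand-in mention detector: treat capitalized tokens and a few
    # pronouns as mentions. A real system would use a learned span model.
    pronouns = {"he", "she", "it", "they", "i", "you"}
    mentions = []
    for token in utterance.replace(".", "").replace(",", "").split():
        if token[0].isupper() or token.lower() in pronouns:
            mentions.append(Mention(turn=turn, span=token))
    return mentions

def link(mention: Mention, clusters: List[List[Mention]]) -> Optional[int]:
    # Stand-in linker: exact match against past mentions. The paper's
    # models instead score the mention against prior context.
    for idx, cluster in enumerate(clusters):
        if any(m.span.lower() == mention.span.lower() for m in cluster):
            return idx
    return None  # no antecedent found

def resolve_turn(state: CorefState, utterance: str) -> CorefState:
    """Accept one utterance plus past context; update clusters in place."""
    turn = len(state.turns)
    state.turns.append(utterance)
    for mention in detect_mentions(utterance, turn):
        idx = link(mention, state.clusters)
        if idx is None:
            state.clusters.append([mention])   # keep singletons as clusters
        else:
            state.clusters[idx].append(mention)
    return state

if __name__ == "__main__":
    state = CorefState()
    for utt in ["Ross called Rachel.", "She was busy, so Ross left."]:
        resolve_turn(state, utt)
    for cluster in state.clusters:
        print([(m.turn, m.span) for m in cluster])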

Venue / Year

Proceedings of the 11th Joint Conference on Lexical and Computational Semantics (*SEM) / 2022

Links

Anthology | Paper | Presentation | BibTeX