What Went Wrong? Explaining Overall Dialogue Quality through Utterance-Level Impacts

James D. Finch, Sarah E. Finch, Jinho D. Choi


Abstract

Improving the user experience of a dialogue system often requires intensive developer effort to read conversation logs, run statistical analyses, and intuit the relative importance of system shortcomings. This paper presents a novel approach to automated analysis of conversation logs that learns the relationship between user-system interactions and overall dialogue quality. Unlike prior work on utterance-level quality prediction, our approach learns the impact of each interaction from the overall user rating without utterance-level annotation, allowing the model's conclusions to be grounded in empirical evidence and obtained at low cost. Our model identifies interactions that correlate strongly with overall dialogue quality in a chatbot setting. Experiments show that the automated analysis from our model agrees with expert judgments, making this work the first to show that such weakly supervised learning of utterance-level quality prediction is highly achievable.
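
The abstract does not specify the model architecture, so the following is only a minimal illustrative sketch of the general idea: assign each utterance a scalar "impact" score and aggregate those scores into a predicted dialogue-level rating, so that the single overall user rating is the only supervision signal. All names (UtteranceImpactModel, impact_head), dimensions, and the mean-aggregation choice here are assumptions for illustration, not the authors' released implementation.

    import torch
    import torch.nn as nn

    class UtteranceImpactModel(nn.Module):
        """Scores each utterance; only the dialogue-level rating supervises training."""
        def __init__(self, utterance_dim=768):
            super().__init__()
            # Maps a precomputed utterance embedding to a scalar impact score.
            self.impact_head = nn.Linear(utterance_dim, 1)

        def forward(self, utterance_embeddings):
            # utterance_embeddings: (num_utterances, utterance_dim)
            impacts = self.impact_head(utterance_embeddings).squeeze(-1)
            # Aggregate per-utterance impacts into one predicted dialogue rating,
            # so no utterance-level labels are ever required.
            return impacts, impacts.mean()

    model = UtteranceImpactModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    embeddings = torch.randn(12, 768)   # 12 utterances encoded by any sentence encoder
    user_rating = torch.tensor(4.0)     # single overall rating for the whole dialogue

    impacts, predicted_rating = model(embeddings)
    loss = nn.functional.mse_loss(predicted_rating, user_rating)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # After training, the per-utterance `impacts` can be ranked to surface the
    # interactions most associated with low or high overall dialogue quality.

The additive/mean aggregation in this sketch is what makes the weak supervision interpretable: because the dialogue-level prediction decomposes into per-utterance terms, each utterance's learned contribution can be inspected directly, which mirrors the abstract's goal of explaining overall quality through utterance-level impacts.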

Venue / Year

Proceedings of the EMNLP Workshop on NLP for Conversational AI (NLP4ConvAI) / 2021

Links

Anthology | Paper | Poster | BibTeX