Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media

Xiangjue Dong, Changmao Li, Jinho D. Choi

2nd-Place Winner


Abstract

We present a transformer-based sarcasm detection model that accounts for context from the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention between the target utterance and the relevant context in the thread. The context-aware models are evaluated on two social media datasets, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models achieve F1-scores of 79.0% and 75.0% on the Twitter and Reddit datasets respectively, ranking among the highest-performing systems of the 36 participants in this shared task.
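As a rough illustration of the idea in the abstract, the sketch below pairs the target utterance with its conversation context in a single transformer input, so multi-head self-attention can relate tokens of the utterance to tokens of the preceding thread. This is a minimal sketch using the Hugging Face transformers API, not the authors' released code; the model name, context concatenation, and label mapping are assumptions for illustration.

```python
# Minimal sketch of context-aware sarcasm detection: encode the thread
# context and the target utterance as a sentence pair, then classify.
# Assumptions: a generic BERT encoder, binary labels (1 = sarcastic).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

context = ["Great, another Monday.", "At least it is not raining!"]  # thread, oldest first
target = "Oh yes, sunshine fixes everything."                        # utterance to classify

# Sentence-pair encoding: the [SEP] token separates context from target,
# while self-attention still spans both segments.
inputs = tokenizer(" ".join(context), target, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # assumed mapping: 0 = not sarcastic, 1 = sarcastic
print("sarcastic" if pred == 1 else "not sarcastic")
```

In practice the classifier head would need to be fine-tuned on the shared-task data before its predictions are meaningful; the snippet only shows how thread context and the target utterance can share one transformer input.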

Venue / Year

Proceedings of the ACL Workshop on Figurative Language Processing: Shared Task on Sarcasm Detection (FigLang:ST) / 2020

Links

Anthology | Paper | Presentation | BibTeX