Adaptation of Multilingual Transformer Encoder for Robust Enhanced Universal Dependency Parsing

Han He, Jinho D. Choi

3rd-Place Winner


Abstract

We present a transformer-based sarcasm detection model that accounts for the context of the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention over the target utterance and the relevant context in the thread. The context-aware models are evaluated on two social media datasets, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models achieve F1 scores of 79.0% and 75.0% on the Twitter and Reddit datasets respectively, ranking among the highest-performing systems of the 36 participants in this shared task.
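To illustrate the architecture described above, the sketch below packs the conversation-thread context and the target utterance into a single input sequence so that the transformer's multi-head attention can attend across both. This is a minimal illustration, not the authors' released code: the encoder name (bert-base-cased), the [CLS] pooling, and the single linear classification head are assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class ContextAwareSarcasmClassifier(nn.Module):
    """Encodes (thread context, target utterance) jointly and predicts sarcasm."""

    def __init__(self, encoder_name: str = "bert-base-cased", num_labels: int = 2):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Hypothetical head: a single linear layer over the pooled representation.
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, context: str, target: str) -> torch.Tensor:
        # Pack "context [SEP] target" into one sequence so self-attention
        # spans the thread context and the target utterance together.
        inputs = self.tokenizer(context, target, return_tensors="pt",
                                truncation=True, max_length=512)
        outputs = self.encoder(**inputs)
        cls = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.classifier(cls)            # logits: sarcastic vs. not


if __name__ == "__main__":
    model = ContextAwareSarcasmClassifier()
    thread = "I love waiting an hour for the bus. Same here, best part of my day."
    reply = "Truly living the dream."
    print(model(thread, reply))
```

In practice the context could be the concatenation of all prior utterances in the thread, truncated from the left so the target utterance is always preserved; that choice is an assumption here rather than a detail stated in the abstract.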

Venue / Year

Proceedings of the International Conference on Parsing Technologies: Shared Task on Enhanced Universal Dependencies (IWPT:ST) / 2020

Links

Anthology | Paper | Presentation | BibTeX | GitHub