Date: 2021-10-15 / 3:00 ~ 4:00 PM
Although dialogue models have made significant progress over the last few years, they remain vulnerable to producing erroneous outputs for a variety of reasons. In particular, recent work suggests that commonsense capabilities do not arise naturally from dialogue model training. As a possible remedy, efforts have been made to supply commonsense knowledge to dialogue models as an external information source. However, current approaches take a simplified view of commonsense knowledge: they utilize only one of many available commonsense resources, and they often fail to assess whether their proposed models actually enable the missing commonsense capabilities, focusing instead on dialogue response generation as a whole. It thus remains an open question how successful such methods are at solving the underlying issue. As one step toward answering this question, we aim to investigate the distribution of various types of commonsense knowledge and reasoning within dialogue responses, and to assess their relative impact on improving both dialogue model performance and commonsense reasoning capabilities.