DFEE: Interactive DataFlow Execution and Evaluation Kit

Han He, Song Feng, Daniele Bonadiman, Yi Zhang, Saab Mansour


Abstract

DataFlow has been emerging as a new paradigm for building task-oriented chatbots due to its expressive semantic representations of dialogue tasks. Despite the availability of a large dataset, SMCalFlow, and a simplified syntax (Meron 2022), the development and evaluation of DataFlow-based chatbots remain challenging due to system complexity and the lack of downstream toolchains. In this demonstration, we present DFEE, an interactive DataFlow Execution and Evaluation toolkit that supports execution, visualization, and benchmarking of semantic parsers given dialogue input and a backend database. We demonstrate the system on a complex dialogue task: event scheduling involving temporal reasoning. It also supports diagnosing parsing results via a user-friendly interface that allows developers to examine the dynamic DataFlow and the corresponding execution results. To illustrate how to benchmark SoTA models, we propose a novel benchmark that covers more sophisticated event-scheduling scenarios and a new metric for task success evaluation.
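To give a flavor of what "executing DataFlow" means, the sketch below evaluates a toy Lispress-like S-expression (SMCalFlow programs use a Lisp-like syntax) against an in-memory mock backend. The operators `createEvent` and `tomorrowAt`, and the backend structure, are hypothetical illustrations, not DFEE's actual API.

```python
# Toy DataFlow-style executor: parse a Lispress-like S-expression and
# evaluate it against a mock calendar backend. All operator names and
# the backend schema are illustrative assumptions, not DFEE's real API.
from datetime import datetime, timedelta

def tokenize(src):
    # Split an S-expression into parens and atoms (no spaces inside strings).
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Recursively build a nested-list AST from the token stream.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard ")"
        return expr
    return tok

BACKEND = {"events": []}  # mock calendar database
ANCHOR = datetime(2023, 1, 1)  # fixed "today" for reproducibility

def evaluate(expr):
    if isinstance(expr, str):
        return expr.strip('"')  # string literal or bare atom
    head, *args = expr
    vals = [evaluate(a) for a in args]
    if head == "createEvent":      # hypothetical operator: write to backend
        event = {"name": vals[0], "start": vals[1]}
        BACKEND["events"].append(event)
        return event
    if head == "tomorrowAt":       # hypothetical temporal operator
        return (ANCHOR + timedelta(days=1)).replace(hour=int(vals[0]))
    raise ValueError(f"unknown operator: {head}")

program = '(createEvent "standup" (tomorrowAt 9))'
result = evaluate(parse(tokenize(program)))
print(result["name"], result["start"].hour)  # → standup 9
```

A real DataFlow runtime additionally handles references to earlier turns, revision, and error recovery; the point here is only that the parser's output is an executable program whose result against the backend can be checked, which is what enables the task-success evaluation described above.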

Venue / Year

Proceedings of the AAAI Conference on Artificial Intelligence (AAAI): Demonstration Program / 2023

Links

Anthology | Paper | Presentation | BibTeX