Evaluating open-domain dialogue systems is difficult due to the diversity of possible correct answers. Automatic metrics such as BLEU correlate weakly with human annotations, resulting in significant bias across different models and datasets. Some researchers resort to human judgment experiments to assess response quality, which is expensive, time-consuming, and not scalable. Moreover, judges tend to evaluate a small number of dialogues, so minor differences in evaluation configuration can lead to dissimilar results. In this talk, I will present interpretable metrics for evaluating topic coherence by making use of distributed sentence representations. Furthermore, I will introduce calculable approximations of human judgment based on conversational coherence by adopting state-of-the-art entailment techniques. I will show that the introduced metrics can be used as a surrogate for human judgment, making it easy to evaluate dialogue systems on large-scale datasets and allowing an unbiased estimate of the quality of the responses.
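To make the idea of coherence metrics built on distributed sentence representations concrete, here is a minimal sketch: it scores a conversation by the average cosine similarity between embeddings of consecutive turns. The `embed` function below is a toy hashing bag-of-words embedding, a stand-in for the pretrained sentence encoders the talk actually relies on; the scoring logic is the part being illustrated.

```python
import numpy as np

def embed(sentence, dim=256):
    """Toy bag-of-words hashing embedding -- a placeholder for a real
    distributed sentence encoder (e.g. a pretrained embedding model)."""
    vec = np.zeros(dim)
    for token in sentence.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def topic_coherence(turns):
    """Average cosine similarity between consecutive turn embeddings.
    Higher values mean the conversation stays on topic."""
    if len(turns) < 2:
        return 1.0
    embs = [embed(t) for t in turns]
    sims = [float(np.dot(a, b)) for a, b in zip(embs, embs[1:])]
    return sum(sims) / len(sims)

on_topic = ["i love hiking in the mountains",
            "hiking in the mountains is great exercise",
            "the mountains are beautiful in autumn"]
off_topic = ["i love hiking in the mountains",
             "my favourite pizza topping is pineapple",
             "the stock market fell sharply today"]

# A coherent conversation should score higher than an incoherent one.
print(topic_coherence(on_topic) > topic_coherence(off_topic))
```

With real sentence encoders the embeddings capture meaning beyond word overlap, but the metric itself stays this simple, which is what makes it interpretable.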
WHAT YOU'LL LEARN
The task of evaluating dialogue systems is far from solved; researchers are still on the quest for a strong and reliable metric that conforms closely with human judgment.
Consistency is key in evaluating dialogue systems
Entailment techniques lay the foundation for future work on better evaluating consistency in dialogues
Deep learning and reinforcement learning enable new research
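The entailment framing in these takeaways can be sketched as follows: each response is checked against the dialogue so far by a natural language inference (NLI) model, and the conversation's consistency is the fraction of responses that contradict nothing said earlier. The `nli_label` function here is a hypothetical heuristic stand-in for a real pretrained entailment classifier; only the scoring scheme is the point.

```python
def nli_label(premise, hypothesis):
    """Hypothetical stand-in for an NLI model returning 'entailment',
    'neutral', or 'contradiction'. A crude negation check for
    illustration only -- a real system would use a trained classifier."""
    if (hypothesis.replace("do not ", "") == premise
            or premise.replace("do not ", "") == hypothesis):
        return "contradiction"
    return "neutral"

def consistency_score(history, responses):
    """Fraction of responses not contradicted by any earlier utterance.
    Each (earlier utterance, response) pair is scored by the NLI model,
    and the response grows the history for subsequent checks."""
    consistent = 0
    for resp in responses:
        if all(nli_label(prev, resp) != "contradiction" for prev in history):
            consistent += 1
        history = history + [resp]
    return consistent / len(responses)

history = ["i have two dogs"]
responses = ["they love playing fetch", "i do not have two dogs"]
print(consistency_score(history, responses))  # 0.5: the second response
                                              # contradicts the history
```

The design choice worth noting is that consistency is checked against the whole accumulated history, not just the previous turn, so a model that contradicts itself several turns later is still penalized.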
Nouha Dziri is a Ph.D. student at the University of Alberta working within the Alberta Machine Intelligence Institute. Her research interests revolve around generative deep learning models and conversational dialogue systems. In particular, her work focuses on modelling an intelligent agent which can have open-ended conversations indistinguishable from human ones. Before her Ph.D., she completed an MSc degree in Computer Science at the University of Alberta, where she worked on dialogue modeling and quality evaluation. She has interned at Google AI in New York City, where she investigated dialogue quality modeling and persuasiveness.