The World is Not Binary: Learning to Rank with Grayscale Data for Dialogue Response Selection
Response selection plays a vital role in building retrieval-based
conversation systems. Although response selection is naturally a
learning-to-rank problem, most prior work takes a point-wise view and trains
binary classifiers for this task: each response candidate is labeled either
relevant (one) or irrelevant (zero). On the one hand, this formulation can be
sub-optimal because it ignores the diversity of response quality. On the
other hand, annotating grayscale data for learning-to-rank can be prohibitively
expensive and challenging. In this work, we show that grayscale data can be
automatically constructed without human effort. Our method employs
off-the-shelf response retrieval models and response generation models as
automatic grayscale data generators. With the constructed grayscale data, we
propose multi-level ranking objectives for training, which can (1) teach a
matching model to capture finer-grained differences in context-response
relevance and (2) reduce the train-test discrepancy in terms of distractor
strength. Our method is simple, effective, and universal. Experiments on three
benchmark datasets and four state-of-the-art matching models show that the
proposed approach brings significant and consistent performance improvements.
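As a concrete illustration of the multi-level ranking idea, the sketch below (not the authors' released code) shows one way to enforce a relevance ordering over grayscale candidates with pairwise margin losses in PyTorch. The assumed ordering (ground-truth > retrieved > generated > random), the margin value, and all function names are illustrative assumptions, not the paper's exact objective.

```python
# Minimal sketch of a multi-level ranking objective over grayscale candidates.
# Assumes a matching model has already produced a relevance score for each
# (context, candidate) pair; the candidate ordering and margin are assumptions.
import torch
import torch.nn.functional as F

def multi_level_ranking_loss(scores_gold, scores_retrieved,
                             scores_generated, scores_random,
                             margin=0.1):
    """Encourage score(gold) > score(retrieved) > score(generated) > score(random),
    enforcing each adjacent gap with a pairwise margin ranking loss."""
    target = torch.ones_like(scores_gold)  # target=1: first argument should rank higher
    return (
        F.margin_ranking_loss(scores_gold, scores_retrieved, target, margin=margin)
        + F.margin_ranking_loss(scores_retrieved, scores_generated, target, margin=margin)
        + F.margin_ranking_loss(scores_generated, scores_random, target, margin=margin)
    )

if __name__ == "__main__":
    # Random scores stand in for a matching model's outputs on a batch of contexts.
    batch = 4
    s_gold, s_ret, s_gen, s_rand = (torch.randn(batch) for _ in range(4))
    print(multi_level_ranking_loss(s_gold, s_ret, s_gen, s_rand).item())
```

Summing pairwise margins over adjacent levels is one simple way to turn graded (grayscale) supervision into a ranking signal; other choices, such as listwise losses or different margins per level, fit the same framework.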
Authors
Zibo Lin, Deng Cai, Yan Wang, Xiaojiang Liu, Hai-Tao Zheng, Shuming Shi