Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of MT quality
Devising metrics to assess translation quality has always been at the core of
machine translation (MT) research. Traditional automatic reference-based
metrics, such as BLEU, have shown correlations with human judgements of
adequacy and fluency and have been paramount to the advancement of MT system
development. Crowd-sourcing has popularised and made scalable metrics based on
human judgements, such as subjective direct assessments (DA) of adequacy,
which are believed to be more reliable than reference-based
automatic metrics. Finally, task-based measurements, such as post-editing time,
are expected to provide a more detailed evaluation of the usefulness of
translations for a specific task. Therefore, while DA averages adequacy
judgements to obtain an appraisal of (perceived) quality independently of the
task, and reference-based automatic metrics aim to estimate quality objectively,
also in a task-independent way, task-based metrics are measurements obtained
either during or after performing a specific task. In this paper we argue that,
although expensive, task-based measurements are the most reliable when
estimating MT quality in a specific task; in our case, this task is
post-editing. To that end, we report experiments on a dataset with
newly collected post-editing indicators and show their usefulness when
estimating post-editing effort. Our results show that task-based metrics
comparing machine-translated and post-edited versions are the best at tracking
post-editing effort, as expected. These metrics are followed by DA, and then by
metrics comparing the machine-translated version and independent references. We
suggest that MT practitioners should be aware of these differences and
acknowledge their implications when deciding how to evaluate MT for
post-editing purposes.
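
To make the category distinction concrete, the following sketch (not part of the paper; the sentences and the simplified metric are illustrative assumptions) contrasts a task-based, HTER-style score, computed between the machine-translated output and its post-edited version, with a reference-based score computed against an independent reference. Both use plain word-level edit distance normalised by target length and ignore the shift operations of full TER; a real evaluation would rely on an established TER/BLEU implementation and on measured indicators such as post-editing time.

# Illustrative sketch only: a simplified HTER-style task-based score
# (edits between the MT output and its post-edit) versus a reference-based
# score (edits between the MT output and an independent reference).
# Shift operations of full TER are ignored; the sentences are invented.

def word_edit_distance(hyp: str, ref: str) -> int:
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    h, r = hyp.split(), ref.split()
    dp = list(range(len(r) + 1))
    for i, hw in enumerate(h, 1):
        prev, dp[0] = dp[0], i
        for j, rw in enumerate(r, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete a hypothesis word
                        dp[j - 1] + 1,      # insert a target word
                        prev + (hw != rw))  # substitution (free if words match)
            prev = cur
    return dp[-1]

def edit_rate(hyp: str, target: str) -> float:
    """Edits needed to turn hyp into target, normalised by target length."""
    return word_edit_distance(hyp, target) / max(len(target.split()), 1)

# Invented example: one MT output, its post-edit, and an independent reference.
mt        = "the patient must takes the medicine two times per day"
post_edit = "the patient must take the medicine twice a day"
reference = "patients should take this medication twice daily"

# Task-based, HTER-style signal: how much the post-editor actually changed.
print("MT vs post-edit (task-based):", round(edit_rate(mt, post_edit), 3))
# Reference-based signal: distance from an independent human translation.
print("MT vs reference (reference-based):", round(edit_rate(mt, reference), 3))

The task-based score directly reflects the edits the post-editor made, which is why metrics comparing the machine-translated and post-edited versions track post-editing effort more closely than comparisons against independent references, whose wording may legitimately differ from a perfectly acceptable post-edit.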