Chain-of-thought prompting successfully improves the reasoning capabilities
of large language models, achieving state-of-the-art results on a range of
datasets. However, these reasoning capabilities appear to emerge only in models
with over 100 billion parameters. In this paper, we explore the transfer of
such reasoning capabilities to models with fewer than 100 billion parameters
via knowledge distillation. Specifically, we finetune a student model on the
chain-of-thought outputs generated by a larger teacher model. Our experiments
show that the proposed method improves task performance across arithmetic,
commonsense, and symbolic reasoning datasets. For example, the accuracy of
T5 XXL on GSM8K improves from 8.11% to 21.99% when finetuned on chains of
thought generated by PaLM-540B.
Authors: Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn
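
The distillation step described above amounts to standard supervised finetuning on teacher-generated rationales. Below is a minimal sketch in Python using the HuggingFace transformers library, assuming the teacher's chains of thought have already been collected; the toy example, the t5-small checkpoint (standing in for T5 XXL), and the hyperparameters are illustrative placeholders, not the paper's exact setup.

```python
# A minimal sketch of chain-of-thought distillation: finetune a T5 student
# on teacher-generated rationales. All names below (the toy dataset, the
# "t5-small" checkpoint, the hyperparameters) are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Toy stand-in for a dataset of teacher-generated (e.g. PaLM-540B)
# chains of thought paired with GSM8K-style questions.
examples = [
    {
        "question": "Natalia sold 48 clips in April and half as many in May. "
                    "How many clips did she sell in total?",
        "rationale": "She sold 48 / 2 = 24 clips in May. 48 + 24 = 72.",
        "answer": "72",
    },
]

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()

for epoch in range(3):
    for ex in examples:
        # The student is trained to reproduce the teacher's rationale
        # followed by the final answer, given only the question.
        inputs = tokenizer(ex["question"], return_tensors="pt")
        targets = tokenizer(
            ex["rationale"] + " The answer is " + ex["answer"] + ".",
            return_tensors="pt",
        )
        loss = model(
            input_ids=inputs.input_ids,
            attention_mask=inputs.attention_mask,
            labels=targets.input_ids,
        ).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

At inference time, the finetuned student generates the rationale and final answer directly from the question, so no teacher model is needed once distillation is complete.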