Self-Consistency: Improving the Accuracy of Large Language Models
Self-Consistency Improves Chain of Thought Reasoning in Language Models
We explore a simple ensemble strategy, self-consistency, that significantly improves the reasoning accuracy of large language models.
The idea is to sample a diverse set of reasoning paths from a language model and return the most consistent final answer in the set, chosen by majority vote. This ensembling method improves reasoning accuracy when combined with chain-of-thought prompting.
On arithmetic and commonsense reasoning benchmarks, we find that self-consistency yields significant accuracy improvements across a variety of datasets, such as GSM8K, SVAMP, MultiArith, CommonsenseQA, and ARC.
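The following is a minimal sketch of the sample-then-vote procedure described above. It assumes a hypothetical `sample_completion(prompt)` callable that draws one chain-of-thought completion from a language model with temperature sampling, and a simple regex-based answer extractor suited to numeric (GSM8K-style) answers; both are illustrative stand-ins, not part of the paper's released code.

```python
import re
from collections import Counter
from typing import Callable, Optional


def extract_answer(completion: str) -> Optional[str]:
    """Pull the final numeric answer from a chain-of-thought completion.

    Assumes the model ends its reasoning with a number; adapt the pattern
    for datasets with non-numeric answers (e.g. multiple-choice letters).
    """
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return matches[-1] if matches else None


def self_consistency(
    sample_completion: Callable[[str], str],
    prompt: str,
    num_samples: int = 40,
) -> Optional[str]:
    """Sample several reasoning paths and return the most consistent answer.

    `sample_completion` is a hypothetical function that queries the model
    once and returns a single sampled chain-of-thought completion.
    """
    answers = []
    for _ in range(num_samples):
        completion = sample_completion(prompt)
        answer = extract_answer(completion)
        if answer is not None:
            answers.append(answer)
    if not answers:
        return None
    # The most consistent answer is the one reached by the largest number
    # of independently sampled reasoning paths (majority vote).
    return Counter(answers).most_common(1)[0][0]
```

In this sketch, increasing `num_samples` trades additional inference cost for a more reliable vote over final answers.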
Authors
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou