Generating coherent chains of thought via prompting
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
We explore the ability of language models to generate a coherent chain of thought, a series of short sentences that mimic the reasoning process a person might have when responding to a question.
Experiments show that inducing a chain of thought via prompting can enable sufficiently large language models to better perform reasoning tasks that otherwise have flat scaling curves.
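To make the prompting setup concrete, below is a minimal sketch of how a chain-of-thought prompt can be assembled. The exemplar question, its worked rationale, and the test question are taken from the paper's Figure 1; the function names (`build_cot_prompt`, `query_model`) are hypothetical, and the model call is left as a placeholder to be wired to whatever language-model API you use.

```python
# Minimal sketch of chain-of-thought prompting.
# The exemplar and test question come from the paper's Figure 1;
# `query_model` is a hypothetical stand-in for a real model API.

# One few-shot exemplar whose answer spells out intermediate
# reasoning steps before stating the final answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)


def build_cot_prompt(question: str) -> str:
    """Prepend the chain-of-thought exemplar so the model imitates
    its step-by-step reasoning on the new question."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"


def query_model(prompt: str) -> str:
    """Hypothetical placeholder: replace with a call to your model API."""
    raise NotImplementedError("wire this up to a language model of your choice")


if __name__ == "__main__":
    question = (
        "The cafeteria had 23 apples. If they used 20 to make lunch and "
        "bought 6 more, how many apples do they have?"
    )
    print(build_cot_prompt(question))
```

Compared with a standard few-shot prompt, whose exemplar answers give only the final number, the exemplar here demonstrates the intermediate steps; the model then tends to emit a similar rationale before its answer, which is what drives the reported gains on reasoning tasks.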
Authors
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou