Language Models are Multilingual Chain-of-Thought Reasoners
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei
We introduce the Multilingual Grade School Math (MGSM) benchmark, created by manually translating 250 grade-school math problems from the GSM8K dataset into ten typologically diverse languages.
We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili.
We show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment.
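For concreteness, below is a minimal sketch of how a few-shot chain-of-thought prompt for an MGSM-style problem might be assembled. The exemplar text (the well-known tennis-ball problem from the chain-of-thought literature), the build_cot_prompt helper, and the call_model placeholder are all illustrative assumptions, not the paper's actual prompts or evaluation code.

# A minimal sketch of few-shot chain-of-thought prompting for
# MGSM-style evaluation. Names here are hypothetical, not the paper's code.

EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(problem: str) -> str:
    """Prepend a worked exemplar so the model writes out intermediate
    reasoning steps before its final answer, instead of answering directly."""
    return f"{EXEMPLAR}\nQ: {problem}\nA:"

def call_model(prompt: str) -> str:
    # Stand-in for any text-completion API; replace with a real call.
    raise NotImplementedError

if __name__ == "__main__":
    # To probe multilingual reasoning, the question (and optionally the
    # exemplar) would be given in the target language, e.g. Swahili.
    print(build_cot_prompt(
        "A train travels 60 km in 1.5 hours. "
        "What is its average speed in km/h?"))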