We investigate the optimal model size and number of tokens for training a
transformer language model under a given compute budget. We find that current
large language models are significantly undertrained, a consequence of the
recent focus on scaling language models whilst keeping the amount of training
data constant. By training over \nummodels language models ranging from 70
million to over 16 billion parameters on 5 to 500 billion tokens, we find that
for compute-optimal training, the model size and the number of training tokens
should be scaled equally: for every doubling of model size the number of
training tokens should also be doubled. We test this hypothesis by training a
predicted compute-optimal model, \chinchilla, that uses the same compute budget
as \gopher but with 70B parameters and 4$\times$ more data. \chinchilla
uniformly and significantly outperforms \gopher (280B), GPT-3 (175B),
Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of
downstream evaluation tasks. This also means that \chinchilla uses
substantially less compute for fine-tuning and inference, greatly facilitating
downstream usage. As a highlight, \chinchilla reaches a state-of-the-art
average accuracy of 67.5\% on the MMLU benchmark, greater than a 7\%
improvement over \gopher.
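As a rough illustration of the equal-scaling rule (a minimal sketch assuming the standard approximation $C \approx 6ND$ for training FLOPs, where $N$ denotes parameter count and $D$ the number of training tokens; these symbols are not defined in the abstract itself), scaling both factors equally corresponds to
\[
  N_{\mathrm{opt}}(C) \propto C^{0.5}, \qquad D_{\mathrm{opt}}(C) \propto C^{0.5},
\]
so that doubling the model size at the compute-optimal frontier requires doubling the number of training tokens and hence roughly quadrupling the compute budget $C$.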
Authors: Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae