Recent developments in large-scale machine learning suggest that by properly
scaling up data, model size, and training time, improvements in pre-training
transfer favorably to most downstream tasks.
In this work, we systematically study this phenomenon and establish that, as
we increase upstream accuracy, performance on downstream tasks saturates.
In particular, we investigate more than 4800 experiments on Vision
Transformers, MLP-Mixers, and ResNets with parameter counts ranging from ten
million to ten billion, trained on the largest available image datasets
(JFT, ImageNet21K) and evaluated on more than 20 downstream image recognition
tasks. We propose a model for downstream performance that reflects this
saturation and captures the nonlinear relationship between upstream and
downstream performance. Delving deeper to understand the reasons that
give rise to these phenomena, we show that the saturation behavior we observe
is closely related to the way that representations evolve through the layers of
the models. We also showcase an even more extreme scenario in which upstream
and downstream performance are at odds with each other: to achieve better
downstream performance, we need to hurt upstream accuracy.
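
The abstract does not specify the functional form of the proposed
downstream-performance model. As a purely illustrative sketch (not the
paper's fitted model), the snippet below fits a hypothetical saturating
power-law curve to synthetic upstream/downstream accuracy pairs; the function
`saturating_fit`, its parameters, and the data are all assumptions, shown
only to make the idea of a nonlinear, saturating relationship concrete.

```python
# Illustrative only: fit a hypothetical saturating curve relating upstream
# accuracy to downstream accuracy. The functional form and the data are
# assumptions for demonstration, not the model proposed in the paper.
import numpy as np
from scipy.optimize import curve_fit

def saturating_fit(acc_up, a, b, acc_max):
    # Downstream accuracy approaches acc_max (saturation) as upstream
    # accuracy grows; a and b control the nonlinear approach rate.
    return acc_max - a * (1.0 - acc_up) ** b

# Synthetic (upstream accuracy, downstream accuracy) pairs for illustration.
acc_upstream = np.linspace(0.55, 0.90, 20)
acc_downstream = (0.80 - 0.6 * (1.0 - acc_upstream) ** 1.5
                  + np.random.normal(0.0, 0.005, acc_upstream.shape))

params, _ = curve_fit(saturating_fit, acc_upstream, acc_downstream,
                      p0=[0.5, 1.0, 0.85])
a, b, acc_max = params
print(f"fitted downstream accuracy ceiling (saturation level): {acc_max:.3f}")
```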