High-Performance Large-Scale Image Recognition Without Normalization
Batch normalization is a key component of most image classification models,
but it has many undesirable properties stemming from its dependence on the
batch size and interactions between examples. Although recent work has
succeeded in training deep ResNets without normalization layers, these models
do not match the test accuracies of the best batch-normalized networks, and are
often unstable for large learning rates or strong data augmentations. In this
work, we develop an adaptive gradient clipping technique which overcomes these
instabilities, and design a significantly improved class of Normalizer-Free
ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on
ImageNet while being up to 8.7x faster to train, and our largest models attain
a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free
models attain significantly better performance than their batch-normalized
counterparts when fine-tuning on ImageNet after large-scale pre-training on a
dataset of 300 million labeled images, with our best models obtaining an
accuracy of 89.2%. Our code is available at https://github.com/deepmind/deepmind-research/tree/master/nfnets
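The abstract names adaptive gradient clipping (AGC) without detailing it. As a rough sketch rather than the authors' exact implementation, AGC rescales each gradient unit-wise whenever its norm exceeds a fraction λ of the corresponding parameter norm. The JAX snippet below illustrates this rule; the function name and the λ = 0.01 and ε = 1e-3 defaults are illustrative assumptions, not values stated in this abstract.

```python
# Minimal sketch of unit-wise adaptive gradient clipping (AGC).
# lam and eps below are illustrative defaults, not taken from this abstract.
import jax.numpy as jnp

def adaptive_grad_clip(grad, param, lam=0.01, eps=1e-3):
    """Rescale `grad` row-wise when its norm exceeds lam * parameter norm."""
    # Flatten trailing dimensions so each output unit (row) gets its own norm.
    g = grad.reshape(grad.shape[0], -1)
    w = param.reshape(param.shape[0], -1)
    g_norm = jnp.linalg.norm(g, axis=1, keepdims=True)
    # Floor the parameter norm at eps so freshly initialized (near-zero)
    # weights do not force the gradient to zero.
    w_norm = jnp.maximum(jnp.linalg.norm(w, axis=1, keepdims=True), eps)
    max_norm = lam * w_norm
    # Shrink only the rows whose gradient norm exceeds the allowed maximum.
    scale = jnp.where(g_norm > max_norm, max_norm / jnp.maximum(g_norm, 1e-6), 1.0)
    return (g * scale).reshape(grad.shape)
```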
Authors
Andrew Brock, Soham De, Samuel L. Smith, Karen Simonyan