Normalization layers and activation functions are fundamental components in
deep networks and typically co-locate with each other. Here we propose to
design them using an automated approach. Instead of designing them separately,
we unify them into a single tensor-to-tensor computation graph, and evolve its
structure starting from basic mathematical functions such as addition,
multiplication, and statistical moments.
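To make this search space concrete, here is a minimal sketch of how a candidate layer could be expressed as a tensor-to-tensor computation graph over such primitives. The primitive set, the evaluate_graph helper, and the toy candidate below are illustrative assumptions, not the paper's actual implementation.

import numpy as np

# Hypothetical primitive set: unary and binary tensor-to-tensor ops.
# Statistical moments are taken over the (H, W) axes of an NHWC feature map.
PRIMITIVES = {
    "add":   lambda a, b: a + b,
    "mul":   lambda a, b: a * b,
    "div":   lambda a, b: a / b,
    "sqrt":  lambda a: np.sqrt(a),
    "var":   lambda a: a.var(axis=(1, 2), keepdims=True),
    "const": lambda a: np.full_like(a, 1e-5),
}

def evaluate_graph(graph, x):
    """Run a candidate layer, expressed as a list of (name, op, inputs) nodes."""
    values = {"x": x}
    for name, op, inputs in graph:
        values[name] = PRIMITIVES[op](*[values[i] for i in inputs])
    return values[graph[-1][0]]          # the last node is the layer output

# Toy candidate: x / sqrt(var(x) + eps) -- variance-only scaling, no centering.
candidate = [
    ("v",   "var",   ["x"]),
    ("eps", "const", ["x"]),
    ("ve",  "add",   ["v", "eps"]),
    ("s",   "sqrt",  ["ve"]),
    ("out", "div",   ["x", "s"]),
]

x = np.random.randn(8, 16, 16, 32).astype(np.float32)   # NHWC batch
y = evaluate_graph(candidate, x)
print(y.shape)   # (8, 16, 16, 32)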
The use of low-level mathematical functions, in contrast to the use of
high-level modules in mainstream neural architecture search (NAS), leads to a
highly sparse and large search space that can be challenging for search
methods. To address this challenge, we
develop efficient rejection protocols to quickly filter out candidate layers
that do not work well. We also use multi-objective evolution to optimize each
layer's performance across many architectures, so that the discovered layers
do not overfit to any single model.
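The sketch below shows one way the rejection checks and multi-architecture scoring could fit together. The proxy architectures, the short_train helper, the specific checks, and the worst-case aggregation of the objectives are simplifying assumptions rather than the paper's exact procedure.

import numpy as np

# Hypothetical proxies; in practice these would be small versions of the
# target architectures trained for only a handful of steps.
ARCHITECTURES = ["resnet_proxy", "mobilenet_proxy", "efficientnet_proxy"]

def short_train(arch_name, layer_fn, steps=100):
    """Hypothetical stand-in: train `arch_name` with the candidate layer
    plugged in for a few proxy steps and return validation accuracy."""
    raise NotImplementedError  # placeholder for an actual training loop

def passes_rejection(layer_fn):
    """Cheap checks that discard hopeless candidates before any training,
    in the spirit of the efficient rejection protocols described above."""
    x = np.random.randn(4, 8, 8, 16).astype(np.float32)   # tiny NHWC batch
    try:
        y = layer_fn(x)
    except Exception:
        return False                       # graph cannot even be evaluated
    if getattr(y, "shape", None) != x.shape or not np.all(np.isfinite(y)):
        return False                       # wrong shape or numerically unstable
    if np.allclose(y, y.flat[0]):
        return False                       # degenerate: output ignores the input
    return True

def fitness(layer_fn):
    """Score a candidate by its worst accuracy across the proxy architectures,
    one simple aggregation of the multiple objectives; the paper's actual
    multi-objective criterion may differ."""
    if not passes_rejection(layer_fn):
        return -np.inf
    return min(short_train(arch, layer_fn) for arch in ARCHITECTURES)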
Our method leads to the discovery of EvoNorms, a set of new
normalization-activation layers with novel, and sometimes surprising, structures
that go beyond existing design patterns. For example, some EvoNorms do not
assume that normalization and activation functions must be applied
sequentially, nor do they need to center the feature maps or require explicit
activation functions. Our experiments show that EvoNorms not only work well on
image classification models including ResNets, MobileNets, and EfficientNets,
but also transfer well to Mask R-CNN with FPN/SpineNet for instance
segmentation and to BigGAN for image synthesis, outperforming BatchNorm- and
GroupNorm-based layers in many cases.
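As one concrete flavor of such a structure, here is an illustrative layer loosely modeled on the sample-based EvoNorm-S0 variant reported in the paper: the input is multiplied by a data-dependent sigmoid gate and divided by a grouped standard deviation, with no mean subtraction and no standalone activation function. The function names, group count, and parameter shapes below are assumptions for the sketch, and the exact discovered expression may differ.

import numpy as np

def group_std(x, groups=8, eps=1e-5):
    """Standard deviation over (H, W) and channel groups of an NHWC tensor;
    note that x itself is never mean-centered (no centering of features)."""
    n, h, w, c = x.shape
    xg = x.reshape(n, h, w, groups, c // groups)
    std = np.sqrt(xg.var(axis=(1, 2, 4), keepdims=True) + eps)
    return np.broadcast_to(std, xg.shape).reshape(n, h, w, c)

def evonorm_s0_like(x, v, gamma, beta, groups=8):
    """Illustrative EvoNorm-S0-style layer:
    y = x * sigmoid(v * x) / group_std(x) * gamma + beta.
    Normalization and nonlinearity are fused into one expression; there is
    no separate ReLU/Swish and no explicit mean-centering step."""
    gate = 1.0 / (1.0 + np.exp(-v * x))      # data-dependent, per-channel gate
    return x * gate / group_std(x, groups) * gamma + beta

# Example usage with per-channel parameters (shapes broadcast over N, H, W).
c = 32
x = np.random.randn(8, 16, 16, c).astype(np.float32)
v, gamma, beta = (np.ones((1, 1, 1, c), np.float32) for _ in range(3))
y = evonorm_s0_like(x, v, gamma, beta)
print(y.shape)   # (8, 16, 16, 32)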
Authors
Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le