MEMO: Test Time Robustness via Adaptation and Augmentation
While deep neural networks can attain good accuracy on in-distribution test
points, many applications require robustness even in the face of unexpected
perturbations in the input, changes in the domain, or other sources of
distribution shift. We study the problem of test time robustification, i.e.,
using the test input to improve model robustness. Recent prior works have
proposed methods for test time adaptation; however, each of these methods introduces
additional assumptions, such as access to multiple test points, that prevent
widespread adoption. In this work, we aim to study and devise methods that make
no assumptions about the model training process and are broadly applicable at
test time. We propose a simple approach that can be used in any test setting
where the model is probabilistic and adaptable: when presented with a test
example, apply different data augmentations to it, and then adapt (all of) the
model parameters by minimizing the entropy of the model's average,
or marginal, output distribution across the augmentations. Intuitively, this
objective encourages the model to make the same prediction across different
augmentations, thus enforcing the invariances encoded in these augmentations,
while also maintaining confidence in its predictions. In our experiments, we
demonstrate that this approach consistently improves robust ResNet and vision
transformer models, achieving accuracy gains of 1-8% over standard model
evaluation and also generally outperforming prior augmentation and adaptation
strategies. We achieve state-of-the-art results for test shifts caused by image
corruptions (ImageNet-C), renditions of common objects (ImageNet-R), and, among
ResNet-50 models, adversarially chosen natural examples (ImageNet-A).
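
To make the procedure concrete, below is a minimal sketch of marginal entropy minimization on a single test point, assuming a PyTorch image classifier and a user-supplied list of augmentation functions; the function name, optimizer, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def adapt_single_test_point(model, x, augmentations, lr=1e-4, steps=1):
    """Adapt all model parameters on one test input by minimizing the entropy
    of the averaged (marginal) predictive distribution over augmented copies.

    model: a PyTorch classifier mapping a batch of images to logits (assumed).
    x: a single test image as a (3, H, W) tensor (assumed).
    augmentations: a list of callables, each mapping an image to an augmented
    image of the same shape (assumed).
    """
    model.train()
    # Illustrative choice of optimizer and learning rate, not the paper's setup.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Apply each augmentation to the same test point and batch the results.
        augmented = torch.stack([aug(x) for aug in augmentations])  # (B, 3, H, W)
        probs = F.softmax(model(augmented), dim=-1)   # per-augmentation predictions
        marginal = probs.mean(dim=0)                  # average over augmentations
        # Entropy of the marginal output distribution; minimizing it pushes the
        # model toward confident predictions that agree across augmentations.
        entropy = -(marginal * torch.log(marginal + 1e-12)).sum()
        entropy.backward()
        optimizer.step()
    # Predict on the original (unaugmented) test point with the adapted model.
    model.eval()
    with torch.no_grad():
        prediction = model(x.unsqueeze(0)).argmax(dim=-1)
    return prediction
```

In this sketch the adaptation is episodic: the parameter update uses only the single test input, so the original weights can be restored before the next test point if independent per-example adaptation is desired.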