Modern deep neural networks (DNNs) have achieved state-of-the-art
performance but are typically over-parameterized. This over-parameterization
may result in an undesirably large generalization error in the absence of
customized training strategies. Recently, a line of research under the name of
customized training strategies. Recently, a line of research under the name of
Sharpness-Aware Minimization (SAM) has shown that minimizing a sharpness
measure, which reflects the geometry of the loss landscape, can significantly
reduce the generalization error. However, SAM-like methods incur a twofold
computational overhead over the given base optimizer (e.g., SGD), since
approximating the sharpness measure requires an extra forward and backward
pass per iteration. In this paper, we propose Sharpness-Aware Training for
Free, or SAF, which mitigates the sharp landscape at almost zero additional
computational cost over the base optimizer. Intuitively, SAF achieves this by
avoiding abrupt drops in the loss at sharp local minima along the trajectory
of weight updates. Specifically, we propose a novel
trajectory loss, based on the KL-divergence between the outputs of DNNs with
the current weights and past weights, as a replacement of the SAM's sharpness
measure. This loss captures the rate of change of the training loss along the
model's update trajectory. By minimizing it, SAF ensures convergence to a
flat minimum with improved generalization capabilities. Extensive empirical
results show that SAF minimizes the sharpness as effectively as SAM does,
yielding better results on the ImageNet dataset with essentially the same
computational cost as the base optimizer.
Authors: Jiawei Du, Daquan Zhou, Jiashi Feng, Vincent Y. F. Tan, Joey Tianyi Zhou
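
The trajectory loss sketched in the abstract can be illustrated as a KL divergence between the model's softened outputs under its current weights and buffered outputs recorded under past weights. The following is a minimal PyTorch-style sketch; the function name trajectory_loss, the temperature tau, the weighting coefficient lam, the output_buffer indexing, and the direction of the KL divergence are illustrative assumptions, not specifications taken from the paper.

```python
import torch
import torch.nn.functional as F

def trajectory_loss(current_logits: torch.Tensor,
                    past_logits: torch.Tensor,
                    tau: float = 1.0) -> torch.Tensor:
    """KL divergence between the model's softened outputs under its
    current weights and buffered outputs recorded under past weights."""
    # Log-probabilities of the current model (the learnable side).
    log_p = F.log_softmax(current_logits / tau, dim=-1)
    # Outputs from past weights act as fixed soft targets, so they
    # are detached from the computation graph.
    q = F.softmax(past_logits.detach() / tau, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities
    # as target; "batchmean" averages the KL over the batch.
    return F.kl_div(log_p, q, reduction="batchmean")

# Sketch of one training step with the trajectory term added
# (lam and output_buffer are hypothetical names):
# logits = model(images)                        # current outputs
# loss = F.cross_entropy(logits, labels) \
#        + lam * trajectory_loss(logits, output_buffer[indices])
# loss.backward(); optimizer.step()
# output_buffer[indices] = logits.detach()      # reuse in a later epoch
```

Because the past outputs are detached and merely read from a buffer, gradients flow only through the current weights; under this assumed scheme the extra cost of the trajectory term is memory for the buffered outputs rather than additional forward-backward passes, which is consistent with the near-zero computational overhead claimed above.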