On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
Stochastic gradient descent is the method of choice for large-scale
optimization of machine learning objective functions. Yet its performance is
highly variable, depending heavily on the choice of stepsizes. This has
motivated a large body of research on adaptive stepsizes. However, there is
currently a gap in our theoretical understanding of these methods, especially
in the non-convex setting. In this paper, we begin to close this gap: we
theoretically analyze a generalized version of the AdaGrad stepsizes in both
the convex and non-convex settings. We give sufficient conditions for these
stepsizes to achieve almost sure asymptotic convergence of the gradients to
zero, proving the first guarantee for generalized AdaGrad stepsizes in the
non-convex setting. Moreover, we show that these stepsizes allow SGD to adapt
automatically to the noise level of the stochastic gradients in both the
convex and non-convex settings, with convergence rates that interpolate
between $O(1/T)$ and $O(1/\sqrt{T})$, up to logarithmic terms.
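For concreteness, a minimal sketch of one such stepsize, assuming a global
(scalar) AdaGrad-style variant with illustrative parameters $\alpha, \beta > 0$
and $\epsilon \geq 0$ (the exact parameterization analyzed in the paper is not
specified in this abstract):
\[
\eta_t = \frac{\alpha}{\left(\beta + \sum_{i=1}^{t-1} \lVert g_i \rVert^2\right)^{\frac{1}{2}+\epsilon}},
\]
where $g_i$ denotes the stochastic gradient at iteration $i$. Intuitively,
noisy gradients inflate the accumulated sum and shrink $\eta_t$, pushing the
rate toward the $O(1/\sqrt{T})$ regime, while low noise keeps $\eta_t$ large
and permits the faster $O(1/T)$ rate.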