A first-order primal-dual method with adaptivity to local smoothness
We consider the problem of finding a saddle point for the convex-concave
objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a
convex function with locally Lipschitz gradient and $g$ is convex and possibly
non-smooth. We propose an adaptive version of the Condat-Vũ algorithm, which
alternates between primal gradient steps and dual proximal steps. The method
achieves stepsize adaptivity through a simple rule involving $\|A\|$ and the
norm of recently computed gradients of $f$. Under standard assumptions, we
prove an $\mathcal{O}(k^{-1})$ ergodic convergence rate. Furthermore, when $f$
is also locally strongly convex and $A$ has full row rank, we show that our
method converges at a linear rate. Numerical experiments illustrate the
practical performance of the algorithm.
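To make the iteration concrete, below is a minimal Python sketch of a Condat-Vũ-type loop that alternates a primal gradient step with an extrapolated dual proximal step. The adaptive rule shown, a local Lipschitz estimate built from recently computed gradients and combined with $\|A\|$ through a classical stepsize condition, is an assumption made for illustration only and is not the paper's exact rule; all identifiers (`adaptive_condat_vu`, `prox_g_conj`) are hypothetical.

```python
import numpy as np

def adaptive_condat_vu(grad_f, prox_g_conj, A, x0, y0, num_iters=1000):
    """Sketch of a Condat-Vu-type loop with an illustrative adaptive stepsize.

    grad_f:      callable, gradient of the smooth convex term f
    prox_g_conj: callable (point, sigma) -> prox of sigma * g^* at point
    The stepsize rule below is a hypothetical stand-in, not the paper's rule.
    """
    norm_A = max(np.linalg.norm(A, 2), 1e-12)  # spectral norm ||A||
    sigma = 1.0 / norm_A                       # fixed dual stepsize (heuristic)
    tau = 1.0 / (sigma * norm_A**2)            # initial primal stepsize
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    x_prev, g_prev = x.copy(), grad_f(x)

    for _ in range(num_iters):
        g = grad_f(x)
        dx = np.linalg.norm(x - x_prev)
        if dx > 0:
            # Local Lipschitz estimate from recently computed gradients of f,
            # plugged into the classical sufficient condition
            # 1/tau - sigma * ||A||^2 >= L/2 (with L replaced by L_loc).
            L_loc = np.linalg.norm(g - g_prev) / dx
            tau = 1.0 / (0.5 * L_loc + sigma * norm_A**2)
        x_prev, g_prev = x.copy(), g

        x_new = x - tau * (g + A.T @ y)                # primal gradient step
        y = prox_g_conj(y + sigma * (A @ (2 * x_new - x)), sigma)  # dual prox
        x = x_new
    return x, y

# Example: f(x) = 0.5 * ||x - b||^2 (so grad f is 1-Lipschitz) and
# g = ||.||_1, whose conjugate g^* is the indicator of the l-infinity
# unit ball; its prox is coordinate-wise clipping, independent of sigma.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))
b = rng.standard_normal(10)
x, y = adaptive_condat_vu(
    grad_f=lambda x: x - b,
    prox_g_conj=lambda p, s: np.clip(p, -1.0, 1.0),
    A=A, x0=np.zeros(10), y0=np.zeros(5),
)
```

Coupling $\tau$ and $\sigma$ through $\|A\|^2$ mirrors the standard sufficient condition for Condat-Vũ-type methods; the point of the paper's adaptive scheme is to set these stepsizes from local information rather than from a global Lipschitz constant of $\nabla f$.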
Authors: Maria-Luiza Vladarean, Yura Malitsky, Volkan Cevher