Mathematical Programming Computation, Volume 13, Issue 3, September 2021

An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization

Fei Li, Zheng Qu

We propose an inexact proximal augmented Lagrangian framework with an explicit inner-problem termination rule for composite convex optimization problems. We consider arbitrary linearly convergent inner solvers, including in particular stochastic algorithms, which makes the resulting framework more scalable in the face of ever-increasing problem dimensions. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring an a priori target accuracy to be set. When the primal and dual domains are bounded, our method achieves O(1/√ϵ) and O(1/ϵ) complexity bounds, in terms of the number of inner-solver iterations, for the strongly convex and non-strongly convex cases, respectively. Without the boundedness assumption, only logarithmic terms need to be added, and the above two bounds increase respectively to Õ(1/√ϵ) and Õ(1/ϵ), which hold both for obtaining an ϵ-optimal solution and an ϵ-KKT solution. Within the general framework that we propose, we also obtain Õ(1/ϵ) and Õ(1/ϵ²) complexity bounds under a relative smoothness assumption on the differentiable component of the objective function. We show, through theoretical analysis as well as numerical experiments, the computational speedup that can be achieved by using randomized inner solvers for large-scale problems.
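
To illustrate the general shape of such a method, the following is a minimal sketch of an inexact proximal augmented Lagrangian outer loop on a toy equality-constrained lasso-type problem, with proximal gradient descent as the (linearly convergent) inner solver. The toy problem, the inner solver choice, and the simple 1/(k+1)² inner tolerance are illustrative assumptions, not the stopping criterion or setting analyzed in the paper.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_proximal_alm(C, d, A, b, lam=0.1, beta=1.0, outer_iters=50):
    # Toy problem (assumed for illustration):
    #   min_x 0.5*||C x - d||^2 + lam*||x||_1   s.t.   A x = b
    n = C.shape[1]
    x, y = np.zeros(n), np.zeros(A.shape[0])
    for k in range(outer_iters):
        x_anchor = x.copy()
        # Smooth part of the proximal AL subproblem:
        #   0.5*||Cx-d||^2 + y^T(Ax-b) + (beta/2)*||Ax-b||^2 + (1/(2*beta))*||x-x_anchor||^2
        def grad(z):
            return (C.T @ (C @ z - d) + A.T @ (y + beta * (A @ z - b))
                    + (z - x_anchor) / beta)
        L = (np.linalg.norm(C, 2) ** 2 + beta * np.linalg.norm(A, 2) ** 2
             + 1.0 / beta)                    # Lipschitz constant of the smooth part
        tol = 1.0 / (k + 1) ** 2              # adaptive inner tolerance (heuristic)
        for _ in range(1000):                 # inner solver: proximal gradient descent
            x_new = soft_threshold(x - grad(x) / L, lam / L)
            if np.linalg.norm(x_new - x) * L <= tol:   # gradient-mapping test
                x = x_new
                break
            x = x_new
        y = y + beta * (A @ x - b)            # multiplier (dual) update
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, d = rng.standard_normal((30, 20)), rng.standard_normal(30)
    A, b = rng.standard_normal((5, 20)), rng.standard_normal(5)
    x, y = inexact_proximal_alm(C, d, A, b)
    print("feasibility ||Ax-b|| =", np.linalg.norm(A @ x - b))

Because the proximal term makes each subproblem strongly convex, the inner proximal gradient method converges linearly; the paper's framework allows this solver to be swapped for any other linearly convergent (in particular randomized) inner solver.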

