On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
· 2016
· Open Access
· DOI: https://doi.org/10.1137/140993090
· OpenAlex: W2398068802
In the present paper, we investigate a linearized proximal algorithm (LPA) for solving a convex composite optimization problem. Each iteration of the LPA is a proximal minimization of the convex composite function with the inner function linearized at the current iterate. The LPA has the attractive computational advantage that the solution of each subproblem is a singleton, which avoids the difficulty, inherent in the Gauss--Newton method (GNM), of finding a minimum-norm solution among the set of minima of its subproblem, while still maintaining the same local convergence rate as the GNM. Under the assumptions of local weak sharp minima of order $p$ ($p \in [1,2]$) and a quasi-regularity condition, we establish a local superlinear convergence rate for the LPA. We also propose a globalized version of the LPA based on a backtracking line search, as well as an inexact version of the LPA. We further apply the LPA to a (possibly nonconvex) feasibility problem and to a sensor network localization problem. Our numerical results illustrate that the LPA meets the demand for an efficient and robust algorithm for sensor network localization.
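In symbols (a reconstruction from the description above; the notation $f = h \circ F$ with $h$ convex and $F$ smooth, and the step-size parameter $\lambda_k > 0$, are assumptions, since the abstract fixes no notation), each LPA step solves

$$ x_{k+1} = \operatorname*{argmin}_{x}\; h\bigl(F(x_k) + F'(x_k)(x - x_k)\bigr) + \frac{1}{2\lambda_k}\,\|x - x_k\|^2. $$

The quadratic proximal term makes the subproblem objective strongly convex, so its minimizer is unique; the GNM subproblem omits this term, which is why one must there select a minimum-norm point from a possibly non-singleton solution set.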
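As a concrete illustration, here is a minimal Python sketch of this iteration, solving each linearized proximal subproblem with cvxpy. The problem instance, the names `lpa_step`, `F`, `J`, and the fixed step size `lam` are hypothetical choices for this sketch, not the paper's setup or experiments.

```python
# Minimal LPA sketch for min_x h(F(x)), h convex, F smooth (assumed setup).
import cvxpy as cp
import numpy as np

def lpa_step(x_k, F, J, h, lam=1.0):
    """One LPA step: minimize h(F(x_k) + J(x_k)(x - x_k)) + ||x - x_k||^2 / (2*lam).

    The proximal term makes the objective strongly convex, so the
    minimizer is a singleton. (lpa_step, J, lam are names assumed here.)
    """
    x = cp.Variable(x_k.size)
    affine_model = F(x_k) + J(x_k) @ (x - x_k)      # inner function linearized at x_k
    prox_term = cp.sum_squares(x - x_k) / (2.0 * lam)
    cp.Problem(cp.Minimize(h(affine_model) + prox_term)).solve()
    return x.value

# Illustrative convex composite instance: h = l1 norm (convex),
# F smooth but nonlinear, so h(F(x)) is a genuine composite objective.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
F = lambda x: A @ x + np.array([np.sin(x[0]), 0.0]) - np.ones(2)
J = lambda x: A + np.array([[np.cos(x[0]), 0.0], [0.0, 0.0]])  # Jacobian of F

x = np.zeros(2)
for _ in range(20):                  # plain local iteration with a fixed step size
    x = lpa_step(x, F, J, cp.norm1)
print("approx. minimizer:", x, " objective:", np.abs(F(x)).sum())
```

In the globalized variant described in the abstract, the step size would instead be chosen by a backtracking line search rather than held fixed.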