On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
Silvia Bonettini, Ignace Loris, Federica Porta, Marco Prato, Simone Rebegoldi
2017 · Open Access
DOI: https://doi.org/10.1088/1361-6420/aa5bfd
OA: W2392724003
We consider a variable metric line-search based proximal gradient method for minimizing the sum of a smooth, possibly nonconvex function and a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a minimum point when the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust, and competitive with recently proposed approaches for the optimization problems arising in these applications.
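To illustrate the kind of scheme described in the abstract, here is a minimal, hedged sketch of a proximal gradient iteration with an Armijo-type backtracking line search, for an objective of the form g(x) + λ‖x‖₁ with g smooth and possibly nonconvex. This is not the authors' exact algorithm (which also uses a variable metric and scaled steps); the ℓ1 term, the identity metric, and the parameter values below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 -- a common choice for the convex,
    # nonsmooth term in the composite objective g(x) + lam * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_linesearch(grad_g, g, x0, lam=0.1, alpha=1.0,
                         delta=0.5, beta=1e-4, max_iter=200, tol=1e-8):
    """Sketch of a line-search based proximal gradient method for
    minimizing g(x) + lam * ||x||_1, with g smooth (possibly nonconvex).
    The identity metric is used; a variable metric would replace the
    plain gradient step with a scaled one."""
    x = x0.copy()
    for _ in range(max_iter):
        grad = grad_g(x)
        # Proximal gradient step: forward (gradient) then backward (prox).
        y = soft_threshold(x - alpha * grad, alpha * lam)
        d = y - x  # search direction toward the proximal point
        if np.linalg.norm(d) < tol:
            break
        # Armijo-type backtracking along d, using a sufficient-decrease
        # measure that accounts for both the smooth and nonsmooth parts.
        f_x = g(x) + lam * np.abs(x).sum()
        decrease = grad @ d + lam * (np.abs(y).sum() - np.abs(x).sum())
        step = 1.0
        while (g(x + step * d) + lam * np.abs(x + step * d).sum()
               > f_x + beta * step * decrease):
            step *= delta
            if step < 1e-12:
                break
        x = x + step * d
    return x
```

For example, minimizing g(x) = ½(x − 2)² plus 0.1‖x‖₁ in one dimension should drive the iterates toward x = 1.9, the point where the shifted gradient balances the ℓ1 subgradient.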