Convex regularization in statistical inverse learning problems
Related Concepts
Tikhonov regularization
Bregman divergence
Inverse problem
Mathematics
Convex function
Applied mathematics
Convex optimization
Mathematical optimization
Mathematical analysis
Computer science
Artificial intelligence
Geometry
Tatiana A. Bubba, Martin Burger, Tapio Helin, Luca Ratti
· 2023
· Open Access
· DOI: https://doi.org/10.3934/ipi.2023013
· OA: W3130603238
We consider a statistical inverse learning problem, where the task is to estimate a function f based on noisy point evaluations of Af, where A is a linear operator. The function Af is evaluated at i.i.d. random design points u_n, n = 1, ..., N, generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and p-homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth measured in the symmetric Bregman distance induced by the penalty functional. We derive concrete rates for Besov norm penalties and numerically demonstrate the correspondence with the observed rates in the context of X-ray tomography.
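As a rough illustration of this setup, the sketch below (not the authors' code) computes a Tikhonov-regularized estimate of the form argmin_f (1/N) Σ_n |(Af)(u_n) − y_n|^2 + α J(f), with J a convex, p-homogeneous penalty, and evaluates the symmetric Bregman distance D_J^sym(f_1, f_2) = ⟨J'(f_1) − J'(f_2), f_1 − f_2⟩ to the ground truth. All concrete choices are illustrative assumptions rather than the paper's: a cosine basis on [0, 1], a diagonal smoothing operator standing in for A, a weighted ℓ^p penalty with p = 1.5 as a simplified stand-in for a Besov-norm penalty, and SciPy's L-BFGS-B as the solver.

```python
# Minimal sketch: Tikhonov regularization with a convex, p-homogeneous penalty
# for a statistical inverse learning problem, plus the symmetric Bregman
# distance induced by the penalty. All modeling choices here are illustrative
# assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

K = 30          # number of basis coefficients
N = 200         # number of random design points
p = 1.5         # homogeneity degree of the penalty (convex for p >= 1)
alpha = 1e-3    # regularization parameter
sigma = 0.05    # noise level

k = np.arange(K)
a = 1.0 / (1.0 + k) ** 2          # spectrum of the (diagonal) operator A
w = (1.0 + k) ** 1.0              # penalty weights (stand-in for Besov weights)

def basis(u):
    """Cosine basis functions evaluated at design points u (shape: len(u) x K)."""
    return np.cos(np.outer(u, k) * np.pi)

# Ground-truth coefficients and noisy point evaluations of A f.
c_true = rng.standard_normal(K) / (1.0 + k) ** 1.5
u = rng.uniform(0.0, 1.0, size=N)          # i.i.d. design points
M = basis(u) * a                           # M[n, j] = a_j * phi_j(u_n)
y = M @ c_true + sigma * rng.standard_normal(N)

def penalty(c):
    return np.sum(w * np.abs(c) ** p)

def penalty_grad(c):
    return p * w * np.sign(c) * np.abs(c) ** (p - 1)

def objective(c):
    r = M @ c - y
    return r @ r / N + alpha * penalty(c)

def objective_grad(c):
    return 2.0 * (M.T @ (M @ c - y)) / N + alpha * penalty_grad(c)

res = minimize(objective, np.zeros(K), jac=objective_grad, method="L-BFGS-B")
c_hat = res.x

def symmetric_bregman(c1, c2):
    """Symmetric Bregman distance induced by the penalty functional."""
    return np.dot(penalty_grad(c1) - penalty_grad(c2), c1 - c2)

print("symmetric Bregman distance to ground truth:", symmetric_bregman(c_hat, c_true))
```

For p = 2 the penalty reduces to a weighted quadratic term (classical Tikhonov); choosing p closer to 1 promotes sparser coefficient estimates, which is the regime where Besov-type penalties differ most from the quadratic case.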