Bằng Công Vũ
Empart: Interactive Convex Decomposition for Converting Meshes to Parts
Simplifying complex 3D meshes is a crucial step in robotics applications to enable efficient motion planning and physics simulation. Common methods, such as approximate convex decomposition, represent a mesh as a collection of simple parts…
A stochastic Lagrangian-based method for nonconvex optimization with nonlinear constraints
The Augmented Lagrangian Method (ALM) is one of the most common approaches for solving linear and nonlinear constrained problems. However, for nonconvex objectives, handling nonlinear inequality constraints remains challenging. In this p…
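The multiplier update at the heart of ALM can be sketched on a toy problem. The following is a minimal deterministic sketch, not the paper's stochastic method; the instance (a quadratic objective with one linear inequality constraint) and all parameter values are my own choices, using the standard inequality-constraint update lam <- max(0, lam + rho * g(x)).

```python
import numpy as np

# Toy problem (my choice): minimize f(x) = ||x - c||^2
# subject to g(x) = sum(x) - 1 <= 0.

def alm(c, rho=10.0, outer=50, inner=200, lr=0.01):
    x = np.zeros_like(c)
    lam = 0.0
    for _ in range(outer):
        # inexact inner minimization of the augmented Lagrangian by gradient descent
        for _ in range(inner):
            mult = max(0.0, lam + rho * (np.sum(x) - 1.0))
            grad = 2.0 * (x - c) + mult * np.ones_like(x)
            x = x - lr * grad
        # standard multiplier update for an inequality constraint
        lam = max(0.0, lam + rho * (np.sum(x) - 1.0))
    return x, lam

x, lam = alm(np.array([2.0, 2.0]))   # KKT solution: x = (0.5, 0.5), lam = 3
```

For this instance the multiplier iteration contracts geometrically toward the KKT multiplier, and the returned x is feasible up to the inner-solve tolerance.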
Implicit regularization with strongly convex bias: Stability and acceleration
Implicit regularization refers to the property of optimization algorithms to be biased towards a certain class of solutions. This property is relevant for understanding the behavior of modern machine learning algorithms as well as for designing ef…
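A classical instance of this bias, taking the strongly convex bias to be the squared Euclidean norm: gradient descent on an underdetermined least-squares problem, started at the origin, converges to the minimum-norm interpolating solution. The instance below is illustrative and my own, not from the paper:

```python
import numpy as np

# Underdetermined least squares: many interpolating solutions exist,
# but gradient descent started at zero stays in the row space of A
# and converges to the minimum Euclidean-norm one.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))        # 5 equations, 20 unknowns
b = rng.standard_normal(5)

x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for f(x) = 0.5 * ||A x - b||^2
for _ in range(20000):
    x -= step * A.T @ (A @ x - b)

x_min_norm = A.T @ np.linalg.solve(A @ A.T, b)  # pseudoinverse solution
```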
A Stochastic Variance Reduction Algorithm with Bregman Distances for Structured Composite Problems
We develop a novel stochastic primal-dual splitting method with Bregman distances for solving structured composite problems involving infimal convolutions in non-Euclidean spaces. The sublinear convergence in expectation of the primal-du…
Convergence analysis of the stochastic reflected forward-backward splitting algorithm
We propose and analyze the convergence of a novel stochastic algorithm for solving monotone inclusions that are the sum of a maximal monotone operator and a monotone, Lipschitzian operator. The proposed algorithm requires only unbiased esti…
A reflected forward-backward splitting method for monotone inclusions involving Lipschitzian operators
The proximal extrapolated gradient method \cite{Malitsky18a} is an extension of the projected reflected gradient method \cite{Malitsky15}. Both methods were proposed for solving classical variational inequalities. In this paper, we…
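The projected reflected gradient iteration takes the form x_{k+1} = P_C(x_k - gamma * F(2 x_k - x_{k-1})). A minimal sketch on a toy monotone Lipschitzian operator of my own choosing; with C the whole space, the projection is the identity:

```python
import numpy as np

# Toy operator: F(x) = M x + q with M skew-symmetric, so F is monotone
# and 1-Lipschitz but not cocoercive. The unique zero of F is x* = (1, 1).

M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q

gamma = 0.4                # step-size below (sqrt(2) - 1) / L with L = 1
x_prev = np.zeros(2)
x = np.zeros(2)
for _ in range(300):
    # reflected step: evaluate F at the extrapolated point 2*x - x_prev
    x, x_prev = x - gamma * F(2.0 * x - x_prev), x
```

The skew-symmetric part is exactly the regime where plain gradient steps diverge, which is why the reflected/extrapolated evaluation point matters.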
Inertial Three-Operator Splitting Method and Applications
We introduce an inertial variant of the forward-Douglas-Rachford splitting and analyze its convergence. We specify an instance of the proposed method to the three-composite convex minimization template. We provide practical guidance on the…
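The three-composite template can be sketched via the closely related Davis-Yin three-operator splitting; the inertial term is omitted here, and the instance and parameters are my own:

```python
import numpy as np

# Toy three-composite problem (my choice):
#   f = indicator of the box [0, 1]^2, g(x) = 0.5 * ||x||_1,
#   h(x) = 0.5 * ||x - c||^2 with c = (2, -0.5); minimizer x* = (1, 0).
# Davis-Yin iteration (no inertia):
#   x_g = prox_{gam*g}(z)
#   x_f = prox_{gam*f}(2*x_g - z - gam * grad_h(x_g))
#   z  += x_f - x_g

c = np.array([2.0, -0.5])
lam, gam = 0.5, 0.5

prox_f = lambda v: np.clip(v, 0.0, 1.0)                                 # box projection
prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - gam * lam, 0.0)  # soft-threshold
grad_h = lambda x: x - c

z = np.zeros(2)
for _ in range(2000):
    x_g = prox_g(z)
    x_f = prox_f(2.0 * x_g - z - gam * grad_h(x_g))
    z += x_f - x_g
```

Only the smooth term is touched through its gradient; the two nonsmooth terms enter only through their proximal operators, which is the point of the three-operator template.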
On the linear convergence of the projected stochastic gradient method with constant step-size
The strong growth condition (SGC) is known to be a sufficient condition for linear convergence of the projected stochastic gradient method using a constant step-size $\gamma$ (PSGM-CS). In this paper, we prove that SGC is also a necessary …
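One setting where SGC holds is interpolation: for a consistent least-squares system every component function shares the same minimizer, and PSGM-CS converges linearly. A toy sketch; the instance and parameters are my own:

```python
import numpy as np

# Consistent least squares: b = A @ x_true, so every component function
# f_i(x) = 0.5 * (a_i . x - b_i)^2 is minimized at x_true and SGC holds.
# The constraint set is a Euclidean ball that contains x_true.

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

radius = np.linalg.norm(x_true) + 1.0
def proj(v):                       # projection onto the ball of that radius
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

gamma = 0.01                       # constant step-size
x = np.zeros(5)
for _ in range(20000):
    i = rng.integers(50)           # sample one component uniformly
    grad_i = (A[i] @ x - b[i]) * A[i]
    x = proj(x - gamma * grad_i)
```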
On the linear convergence of the stochastic gradient method with constant step-size
The strong growth condition (SGC) is known to be a sufficient condition for linear convergence of the stochastic gradient method using a constant step-size $\gamma$ (SGM-CS). In this paper, we provide a necessary condition for the linear conve…
A First-Order Stochastic Primal-Dual Algorithm with Correction Step
We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the s…
Stochastic Three-Composite Convex Minimization
We propose a stochastic optimization method for the minimization of the sum of three convex functions, one of which has Lipschitz continuous gradient as well as restricted strong convexity. Our approach is most suitable in the setting wher…
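A rough sketch of a stochastic three-composite step: a three-operator splitting iteration driven by an unbiased noisy gradient of the smooth term, with a small constant step-size. This is an illustration under my own assumptions, not the paper's algorithm or step-size rule:

```python
import numpy as np

# Toy instance (my choice): f = indicator of the box [0, 1]^2,
# g(x) = 0.5 * ||x||_1, and smooth term h(x) = 0.5 * ||x - c||^2 with
# c = (2, -0.5), whose gradient is observed with additive Gaussian noise.
# Minimizer: x* = (1, 0).

rng = np.random.default_rng(2)
c = np.array([2.0, -0.5])
lam, gam, sigma = 0.5, 0.01, 0.1

prox_f = lambda v: np.clip(v, 0.0, 1.0)                                 # box projection
prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - gam * lam, 0.0)  # soft-threshold

z = np.zeros(2)
for _ in range(20000):
    x_g = prox_g(z)
    noisy_grad = (x_g - c) + sigma * rng.standard_normal(2)  # unbiased gradient estimate
    x_f = prox_f(2.0 * x_g - z - gam * noisy_grad)
    z += x_f - x_g
```

With a constant step the iterates settle into a noise floor around the minimizer rather than converging exactly, which is why decreasing step-sizes appear in stochastic analyses.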