Fedor Stonyakin
A Fully Adaptive Frank-Wolfe Algorithm for Relatively Smooth Problems and Its Application to Centralized Distributed Optimization
We study the Frank-Wolfe algorithm for constrained optimization problems with relatively smooth objectives. Building upon our previous work, we propose a fully adaptive variant of the Frank-Wolfe method that dynamically adjusts the step si…
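For background, relative smoothness is usually defined via a Bregman divergence (this is the standard definition from the relative-smoothness literature, stated here for orientation rather than quoted from the paper): an objective $f$ is $L$-smooth relative to a reference function $h$ if, with $V(y, x) = h(y) - h(x) - \langle \nabla h(x), y - x \rangle$,
$$f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + L\, V(y, x) \quad \text{for all } x, y,$$
which recovers ordinary $L$-smoothness when $h(x) = \frac{1}{2}\|x\|^2$.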
Mirror Descent Methods with Weighting Scheme for Outputs for Constrained Variational Inequality Problems
This paper is devoted to variational inequality problems. We consider two classes of problems: the first is the classical constrained variational inequality, and the second is the same problem with functional (inequality-type) constraints. …
On quasi-convex smooth optimization problems by a comparison oracle
Frequently, when dealing with many machine learning models, optimization problems appear to be challenging due to a limited understanding of the constructions and characterizations of the objective functions in these problems. Therefore, m…
Accelerated Bregman gradient methods for relatively smooth and relatively Lipschitz continuous minimization problems
In this paper, we propose some accelerated methods for solving optimization problems under the condition of relatively smooth and relatively Lipschitz continuous functions with an inexact oracle. We consider the problem of minimizing the c…
Universal methods for variational inequalities: deterministic and stochastic cases
In this paper, we propose universal proximal mirror methods to solve the variational inequality problem with Hölder continuous operators in both deterministic and stochastic settings. The proposed methods automatically adapt not only to th…
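For reference, Hölder continuity of an operator $F$ with exponent $\nu \in [0, 1]$ and constant $L_\nu$ is the standard condition (stated here as background, not quoted from the paper)
$$\|F(x) - F(y)\|_* \le L_\nu \|x - y\|^\nu \quad \text{for all } x, y,$$
so $\nu = 1$ corresponds to a Lipschitz continuous operator and $\nu = 0$ to the bounded (non-smooth) case; universal methods are designed to work without knowing $\nu$ or $L_\nu$ in advance.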
On Some Versions of Subspace Optimization Methods with Inexact Gradient Information
It is well-known that accelerated gradient first order methods possess optimal complexity estimates for the class of convex smooth minimization problems. In many practical situations, it makes sense to work with inexact gradients. However,…
Higher Degree Inexact Model for Optimization problems
In this paper, we propose a new concept of the inexact higher degree $(δ, L, q)$-model of a function, which generalizes the inexact $(δ, L)$-model, the $(δ, L)$-oracle, and the $(δ, L)$-oracle of degree $q \in [0,2)$. Some examples we…
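For context, the inexact $(δ, L)$-oracle of Devolder, Glineur, and Nesterov that the $(δ, L, q)$-model generalizes is commonly defined as follows (a standard definition, given here as background): a pair $(f_δ(x), g_δ(x))$ is a $(δ, L)$-oracle of $f$ at $x$ if for all $y$
$$0 \le f(y) - f_δ(x) - \langle g_δ(x), y - x \rangle \le \frac{L}{2}\|y - x\|^2 + δ.$$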
Adaptive Variant of Frank-Wolfe Method for Relative Smooth Convex Optimization Problems
The paper introduces a new adaptive version of the Frank-Wolfe algorithm for relatively smooth convex functions. It is proposed to use a Bregman divergence instead of half the square of the Euclidean norm in the step-size formula. Al…
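For orientation, the sketch below shows the classical Euclidean Frank-Wolfe loop with the standard short-step rule, whose denominator contains the half squared Euclidean norm that, per the abstract, the paper replaces with a Bregman divergence. The function names and the simplex example are illustrative assumptions of ours; the code does not implement the paper's adaptive rule.

import numpy as np

# Minimal sketch of the classical Euclidean Frank-Wolfe method with the
# standard short-step rule; illustrative only, not the paper's adaptive
# Bregman variant. `grad` returns the gradient of the objective, `lmo`
# solves the linear minimization oracle over the feasible set, and L is
# an estimate of the Lipschitz constant of the gradient.
def frank_wolfe(grad, lmo, x0, L, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        s = lmo(g)                       # s = argmin of <g, s> over the feasible set
        d = s - x
        gap = float(-g @ d)              # Frank-Wolfe gap <g, x - s>
        if gap <= 1e-10:
            break
        # short step from the quadratic upper bound; the 0.5 * ||d||^2 term
        # hidden in the denominator is what the abstract proposes to replace
        # with a Bregman divergence
        step = min(1.0, gap / (L * float(d @ d)))
        x = x + step * d
    return x

# Hypothetical usage: minimize 0.5 * ||x - b||^2 over the probability simplex.
b = np.array([0.2, 0.9, -0.1])
lmo = lambda g: np.eye(len(g))[np.argmin(g)]     # minimizing simplex vertex
x_opt = frank_wolfe(lambda x: x - b, lmo, np.ones(3) / 3, L=1.0)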
Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
The paper considers two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of the simple subgradient method with …
Mirror Descent Methods with Weighting Scheme for Outputs for Optimization Problems with Functional Constraints
This paper is devoted to new mirror descent-type methods with switching between two types of iteration points (productive and non-productive) for constrained convex optimization problems with several convex functional (inequality-type) con…
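To illustrate the switching between productive and non-productive steps that the abstract refers to, here is a minimal Euclidean sketch of a switching subgradient scheme; the names, step rule, and the plain averaging of productive points are our illustrative assumptions, while the paper works with mirror descent and studies weighting schemes for the output.

import numpy as np

# Minimal Euclidean sketch of a switching subgradient scheme for
#     min f(x)  subject to  g(x) <= 0.
# When the constraint is nearly satisfied the step is "productive" and uses
# a subgradient of f; otherwise it is "non-productive" and uses a subgradient
# of the violated constraint g. Illustrative only; the paper's mirror descent
# methods and output weighting schemes are more general.
def switching_subgradient(f_sub, g, g_sub, x0, eps, h, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    productive = []
    for _ in range(max_iter):
        if g(x) <= eps:                  # productive step on the objective
            d = f_sub(x)
            productive.append(x.copy())
        else:                            # non-productive step on the constraint
            d = g_sub(x)
        x = x - h * d / max(np.linalg.norm(d), 1e-12)
    # plain (unweighted) average of the productive points as the output
    return np.mean(productive, axis=0) if productive else x

# Hypothetical usage: minimize <c, x> subject to ||x||^2 - 1 <= 0.
c = np.array([1.0, -2.0])
x_out = switching_subgradient(lambda x: c, lambda x: float(x @ x) - 1.0,
                              lambda x: 2.0 * x, np.zeros(2), eps=1e-3, h=0.05)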
Subgradient methods with variants of Polyak step-size for quasi-convex optimization with inequality constraints for analogues of sharp minima
In this paper, we consider two variants of the concept of sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of a simple s…
About some works of Boris Polyak on convergence of gradient methods and their development
The paper presents a review of the state of the art in subgradient and accelerated methods of convex optimization, including settings with disturbances and with various kinds of access to information about the objective function (function value, gr…
Intermediate Gradient Methods with Relative Inexactness
This paper is devoted to first-order algorithms for smooth convex optimization with inexact gradients. Unlike the majority of the literature on this topic, we consider the setting of relative rather than absolute inexactness. More precisel…
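In this terminology (a standard distinction in the inexact-gradient literature, stated as background rather than quoted from the paper), an approximate gradient $\tilde\nabla f(x)$ has absolute inexactness $δ$ if $\|\tilde\nabla f(x) - \nabla f(x)\| \le δ$, and relative inexactness $\alpha \in [0, 1)$ if
$$\|\tilde\nabla f(x) - \nabla f(x)\| \le \alpha \|\nabla f(x)\|,$$
so the permitted error scales with the gradient norm and shrinks near stationary points.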
Adaptive Methods for Variational Inequalities with Relatively Smooth and Relatively Strongly Monotone Operators
The article is devoted to some adaptive methods for variational inequalities with relatively smooth and relatively strongly monotone operators. Starting from the recently proposed proximal variant of the extragradient method for this class…
Adaptive Variant of the Frank-Wolfe Algorithm for Convex Optimization Problems
A variant of the Frank-Wolfe method is considered for convex optimization problems, with adaptive selection of the step-size parameter based on information about the smoothness of the objective function (the Lipschitz constant of the gradient). The…
Gradient-Type Method for Optimization Problems with Polyak-Lojasiewicz Condition: Relative Inexactness in Gradient and Adaptive Parameters Setting
We consider minimization problems under the well-known Polyak-Łojasiewicz condition with a Lipschitz-continuous gradient. Such problems arise in many areas of machine learning and related fields. Furthermore, we assume that a gradient is a…
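For reference, the Polyak-Łojasiewicz condition with parameter $\mu > 0$ for a smooth function $f$ with optimal value $f^*$ is usually written as (a standard form, given as background)
$$\frac{1}{2}\|\nabla f(x)\|^2 \ge \mu \bigl(f(x) - f^*\bigr) \quad \text{for all } x,$$
which is enough for linear convergence of gradient-type methods even without convexity.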
Highly Smoothness Zero-Order Methods for Solving Optimization Problems under PL Condition
In this paper, we study the black box optimization problem under the Polyak-Łojasiewicz (PL) condition, assuming that the objective function is not just smooth, but has higher smoothness. By using "kernel-based" approximation instead of t…
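A common form of the kernel-based gradient estimator in this line of work (shown only to illustrate the idea; the exact estimator and assumptions used in the paper may differ) is
$$\tilde\nabla f(x) = \frac{d}{2h}\bigl(f(x + h r e) - f(x - h r e)\bigr) K(r)\, e,$$
where $e$ is uniform on the unit sphere in $\mathbb{R}^d$, $r$ is uniform on $[-1, 1]$, $h > 0$ is a smoothing radius, and the kernel $K$ is chosen so that the estimator exploits higher-order smoothness of $f$ and has smaller bias than the plain two-point difference.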
Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
The paper is devoted to the study of subgradient methods with various variations of the B.T. Polyak step-size on classes of minimization problems for weakly convex and relatively weakly convex functions that possess a corresponding analogue of a sharp minimum. It turns out…
Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
This article is devoted to improving the convergence rate guarantees of gradient-type numerical methods for relatively smooth and relatively Lipschitz minimization problems under additional assumptions about certain analogues of strong convexity of the objective…
Online Optimization Problems with Functional Constraints under Relative Lipschitz Continuity and Relative Strong Convexity Conditions
Recently, there were introduced important classes of relatively smooth, relatively continuous, and relatively strongly convex optimization problems. These concepts have significantly expanded the class of problems for which optimal complex…
Solving strongly convex-concave composite saddle-point problems with low dimension of one group of variables
Algorithmic methods are developed that guarantee efficient complexity estimates for strongly convex-concave saddle-point problems in the case when one group of variables has a high dimension, while the other has a rather low dimension (up to…
Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
Due to its applications in many different places in machine learning and other connected engineering applications, the problem of minimization of a smooth function that satisfies the Polyak-Łojasiewicz condition receives much attention fro…
Gradient-Type Methods For Decentralized Optimization Problems With Polyak-Łojasiewicz Condition Over Time-Varying Networks
This paper focuses on the decentralized optimization (minimization and saddle point) problems with objective functions that satisfy Polyak-Łojasiewicz condition (PL-condition). The first part of the paper is devoted to the minimization pro…
Some Adaptive First-order Methods for Variational Inequalities with Relatively Strongly Monotone Operators and Generalized Smoothness
In this paper, we introduce some adaptive methods for solving variational inequalities with relatively strongly monotone operators. First, we focus on a modification of the recently proposed (for the smooth case [1]) adaptive numerical meth…
Stopping Rules for Gradient Methods for Non-Convex Problems with Additive Noise in Gradient
We study the gradient method under the assumption that only an additively inexact gradient is available, for problems that are, generally speaking, non-convex. The non-convexity of the objective function, as well as the use of an inexactly specified gr…
Adaptive first-order methods for relatively strongly convex optimization problems
This article is devoted to some adaptive first-order methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity, which has recently emerged in optimization, significantly extends…
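For background (a standard definition, not a quotation from the article): a functional $f$ is $\mu$-strongly convex relative to a reference function $h$ if, with the Bregman divergence $V(y, x) = h(y) - h(x) - \langle \nabla h(x), y - x \rangle$,
$$f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \mu\, V(y, x) \quad \text{for all } x, y,$$
which reduces to ordinary strong convexity when $h(x) = \frac{1}{2}\|x\|^2$.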
Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
In this paper we propose a generalized condition for a sharp minimum, somewhat similar to the inexact oracle proposed recently by Devolder-Glineur-Nesterov. The proposed approach makes it possible to extend the class of applicability of…
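For context, the classical sharp minimum condition that the proposed condition relaxes is usually stated as (a standard Polyak-type form, given as background)
$$f(x) - f^* \ge \alpha \operatorname{dist}(x, X_*) \quad \text{for all feasible } x,$$
where $X_*$ is the solution set, $f^*$ the optimal value, and $\alpha > 0$; under such a condition, subgradient methods with Polyak-type step-sizes are known to converge at a linear rate.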