Mohammad Alkousa
Lipschitz-Free Mirror Descent Methods for Non-Smooth Optimization Problems
The part of the convergence rate analysis of the mirror descent method connected with the adaptive time-varying step size rules of Alkousa et al. (MOTOR 2024, pp. 3-18) is corrected. Moreover, a Lipschitz-free mirror des…
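For orientation, here is a minimal sketch (not the paper's method) of mirror descent with the time-varying step size $\gamma_k = c/\sqrt{k+1}$, assuming Euclidean prox-structure on a ball; the function and parameter names are illustrative:

```python
import numpy as np

def mirror_descent(subgrad, x0, radius, n_iters=1000, c=1.0):
    """Mirror descent with Euclidean prox (projected subgradient descent)
    and the time-varying step size gamma_k = c / sqrt(k + 1).
    subgrad(x) returns any subgradient of the objective at x; the feasible
    set is the Euclidean ball of the given radius."""
    x = x0.astype(float)
    x_avg, total_weight = np.zeros_like(x), 0.0
    for k in range(n_iters):
        gamma = c / np.sqrt(k + 1)
        x = x - gamma * subgrad(x)
        norm = np.linalg.norm(x)
        if norm > radius:            # Euclidean projection onto the ball
            x *= radius / norm
        x_avg += gamma * x           # step-size-weighted output averaging
        total_weight += gamma
    return x_avg / total_weight

# Example: minimize the non-smooth f(x) = ||x - a||_1 over the unit ball.
a = np.array([0.3, -0.2, 0.5])
x_hat = mirror_descent(lambda x: np.sign(x - a), np.zeros(3), radius=1.0)
```

With such step sizes the weighted average of the iterates attains the optimal non-smooth rate $O(N^{-1/2})$ discussed in the entries below.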
Mirror Descent Methods with Weighting Scheme for Outputs for Constrained Variational Inequality Problems
This paper is devoted to variational inequality problems. We consider two classes of problems: the first is the classical constrained variational inequality, and the second is the same problem with functional (inequality-type) constraints. …
On quasi-convex smooth optimization problems by a comparison oracle
Frequently, when dealing with many machine learning models, optimization problems appear to be challenging due to a limited understanding of the constructions and characterizations of the objective functions in these problems. Therefore, m…
Accelerated Bregman gradient methods for relatively smooth and relatively Lipschitz continuous minimization problems
In this paper, we propose some accelerated methods for solving optimization problems under the condition of relatively smooth and relatively Lipschitz continuous functions with an inexact oracle. We consider the problem of minimizing the c…
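As a reference point, a minimal sketch of the basic (non-accelerated) Bregman gradient step that such methods build on, assuming the negative-entropy prox-function on the probability simplex; the names are illustrative:

```python
import numpy as np

def bregman_gradient_step(x, grad, L):
    """One relatively-smooth gradient step on the probability simplex:
    argmin_y { <grad, y> + L * KL(y, x) } has the closed-form
    multiplicative update below (KL is the Bregman divergence of the
    negative entropy)."""
    y = x * np.exp(-grad / L)
    return y / y.sum()

# Example: one step from the uniform distribution.
x = np.full(4, 0.25)
x_next = bregman_gradient_step(x, grad=np.array([1.0, 0.0, 0.5, 2.0]), L=1.0)
```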
Mixed Newton Method for Optimization in Complex Spaces
In this paper, we modify and apply the recently introduced Mixed Newton Method, originally designed for minimizing real-valued functions of complex variables, to the minimization of real-valued functions of real variables by exten…
Universal methods for variational inequalities: deterministic and stochastic cases
In this paper, we propose universal proximal mirror methods to solve the variational inequality problem with Hölder continuous operators in both deterministic and stochastic settings. The proposed methods automatically adapt not only to th…
Higher Degree Inexact Model for Optimization problems
In this paper, we propose a new concept of the inexact higher degree $(\delta, L, q)$-model of a function, which generalizes the inexact $(\delta, L)$-model, the $(\delta, L)$-oracle, and the $(\delta, L)$-oracle of degree $q \in [0,2)$. Some examples we…
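For context, the classical inexact $(\delta, L)$-oracle of Devolder, Glineur, and Nesterov supplies, at every $x \in Q$, a pair $(f_\delta(x), g_\delta(x))$ with

$$f_\delta(x) + \langle g_\delta(x), y - x \rangle \le f(y) \le f_\delta(x) + \langle g_\delta(x), y - x \rangle + \frac{L}{2}\|y - x\|^2 + \delta \quad \text{for all } y \in Q;$$

the higher degree $(\delta, L, q)$-model of the paper generalizes this inequality.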
Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
The paper considers two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. It investigates the problem of describing a variant of the simple subgradient method with …
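A minimal sketch of the classical (unconstrained) subgradient method with the B.T. Polyak step size, which the constrained switching variants build on; the sharp-minimum setting assumes the optimal value $f^*$ is available, and all names here are illustrative:

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, n_iters=500):
    """Subgradient method with the B.T. Polyak step size
    gamma_k = (f(x_k) - f_star) / ||g_k||^2, assuming the optimal
    value f_star is known."""
    x, best = x0.astype(float), x0.astype(float)
    for _ in range(n_iters):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):
            break                      # hit an optimal point
        x = x - (gap / np.dot(g, g)) * g
        if f(x) < f(best):
            best = x.copy()
    return best

# Example: f(x) = ||x||_1 with known optimal value f_star = 0.
x_hat = polyak_subgradient(lambda x: np.abs(x).sum(), np.sign,
                           np.array([2.0, -1.0]), f_star=0.0)
```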
Optimal Convergence Rate for Mirror Descent Methods with special Time-Varying Step Sizes Rules
In this paper, the optimal convergence rate $O\left(N^{-1/2}\right)$ (where $N$ is the total number of iterations performed by the algorithm), without the presence of a logarithmic factor, is proved for mirror descent algorithms with speci…
Mirror Descent Methods with Weighting Scheme for Outputs for Optimization Problems with Functional Constraints
This paper is devoted to new mirror descent-type methods with switching between two types of iteration points (productive and non-productive) for constrained convex optimization problems with several convex functional (inequality-type) con…
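A minimal sketch of the switching scheme, assuming Euclidean prox-structure, a single constraint $g(x) \le 0$, and the standard $\varepsilon$-dependent step sizes; this is an illustration, not the paper's exact algorithm:

```python
import numpy as np

def md_with_switching(f_grad, g, g_grad, x0, eps, n_iters=2000):
    """Mirror descent (Euclidean prox) with switching: if the constraint
    g(x) <= eps holds at the current point, take a 'productive' step along
    a subgradient of the objective; otherwise take a 'non-productive' step
    along a subgradient of the violated constraint. Outputs the average of
    the productive points (a simple weighting scheme)."""
    x = x0.astype(float)
    productive = []
    for _ in range(n_iters):
        if g(x) <= eps:                       # productive iteration
            d = f_grad(x)
            productive.append(x.copy())
        else:                                 # non-productive iteration
            d = g_grad(x)
        x = x - (eps / np.dot(d, d)) * d      # step size eps / ||d||^2
    return np.mean(productive, axis=0) if productive else x
```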
On Quasi-Convex Smooth Optimization Problems by a Comparison Oracle
Frequently, when dealing with many machine learning models, optimization problems appear to be challenging due to a limited understanding of the constructions and characterizations of the objective functions in these problems. Therefore, m…
Subgradient methods with variants of Polyak step-size for quasi-convex optimization with inequality constraints for analogues of sharp minima
In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of a simple s…
Intermediate Gradient Methods with Relative Inexactness
This paper is devoted to first-order algorithms for smooth convex optimization with inexact gradients. Unlike the majority of the literature on this topic, we consider the setting of relative rather than absolute inexactness. More precisel…
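For reference, relative inexactness of the gradient typically means that the available approximation $\tilde\nabla f(x)$ satisfies

$$\|\tilde\nabla f(x) - \nabla f(x)\| \le \alpha \|\nabla f(x)\| \quad \text{for some } \alpha \in [0, 1),$$

in contrast with the absolute model $\|\tilde\nabla f(x) - \nabla f(x)\| \le \delta$.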
Adaptive Methods for Variational Inequalities with Relatively Smooth and Relatively Strongly Monotone Operators
The article is devoted to some adaptive methods for variational inequalities with relatively smooth and relatively strongly monotone operators. Starting from the recently proposed proximal variant of the extragradient method for this class…
Adaptive Variant of the Frank-Wolfe Algorithm for Convex Optimization Problems
A variant of the Frank-Wolfe method for convex optimization problems is proposed, with adaptive selection of the step parameter based on information about the smoothness of the objective function (the Lipschitz constant of the gradient). The…
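A minimal sketch of such an adaptive step rule, assuming a backtracking estimate of the gradient's Lipschitz constant; `lmo` and the other names are illustrative, not the paper's notation:

```python
import numpy as np

def adaptive_frank_wolfe(f, grad, lmo, x0, n_iters=200, L=1.0):
    """Frank-Wolfe with adaptive (backtracking) estimation of the
    Lipschitz constant L of the gradient. lmo(g) must solve the linear
    minimization oracle min_{s in feasible set} <g, s>."""
    x = x0.astype(float)
    for _ in range(n_iters):
        g = grad(x)
        d = lmo(g) - x                        # Frank-Wolfe direction
        gap = -np.dot(g, d)                   # Frank-Wolfe gap
        if gap <= 1e-12:
            break
        L = max(L / 2.0, 1e-12)               # optimistic halving, then verify
        while True:
            eta = min(1.0, gap / (L * np.dot(d, d)))
            # descent-lemma test certifying the current estimate of L
            if f(x + eta * d) <= f(x) - eta * gap + 0.5 * L * eta**2 * np.dot(d, d):
                break
            L *= 2.0
        x = x + eta * d
    return x

# Example LMO for the probability simplex: the minimizing vertex.
simplex_lmo = lambda g: np.eye(len(g))[np.argmin(g)]
```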
Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
The paper studies subgradient methods with various variations of the B.T. Polyak step size on classes of minimization problems for weakly convex and relatively weakly convex functions that possess a corresponding analogue of the sharp minimum. It turns out…
Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
This article is devoted to improving the rate guarantees of gradient-type numerical methods for relatively smooth and relatively Lipschitz minimization problems under additional assumptions about certain analogues of the strong convexity of the objective…
Online Optimization Problems with Functional Constraints under Relative Lipschitz Continuity and Relative Strong Convexity Conditions
Recently, there were introduced important classes of relatively smooth, relatively continuous, and relatively strongly convex optimization problems. These concepts have significantly expanded the class of problems for which optimal complex…
Solving strongly convex-concave composite saddle-point problems with low dimension of one group of variables
Algorithmic methods are developed that guarantee efficient complexity estimates for strongly convex-concave saddle-point problems in the case when one group of variables has a high dimension, while another has a rather low dimension (up to…
Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
Due to its applications in many areas of machine learning and related engineering applications, the problem of minimizing a smooth function that satisfies the Polyak-Łojasiewicz condition has received much attention fro…
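For reference, the Polyak-Łojasiewicz condition and the linear rate it yields for plain gradient descent with step $1/L$ on an $L$-smooth function read

$$\frac{1}{2}\|\nabla f(x)\|^2 \ge \mu\,\bigl(f(x) - f^*\bigr) \quad\Longrightarrow\quad f(x_N) - f^* \le \Bigl(1 - \frac{\mu}{L}\Bigr)^{N}\bigl(f(x_0) - f^*\bigr).$$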
Gradient-Type Methods For Decentralized Optimization Problems With Polyak-Łojasiewicz Condition Over Time-Varying Networks
This paper focuses on the decentralized optimization (minimization and saddle point) problems with objective functions that satisfy Polyak-Łojasiewicz condition (PL-condition). The first part of the paper is devoted to the minimization pro…
Some Adaptive First-order Methods for Variational Inequalities with Relatively Strongly Monotone Operators and Generalized Smoothness
In this paper, we introduce some adaptive methods for solving variational inequalities with relatively strongly monotone operators. Firstly, we focus on a modification of the recently proposed (for the smooth case [1]) adaptive numerical meth…
Optimization in complex spaces with the Mixed Newton Method
We propose a second-order method for the unconstrained minimization of functions $f(z)$ of complex arguments. We call it the Mixed Newton Method due to the use of the mixed Wirtinger derivative $\frac{\partial^2f}{\partial\bar z\partial z}$ fo…
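A minimal scalar sketch of the iteration $z_{k+1} = z_k - \left(\frac{\partial^2 f}{\partial \bar z \partial z}\right)^{-1} \frac{\partial f}{\partial \bar z}$ with hand-computed Wirtinger derivatives; the paper treats the general vector case, and the example here is illustrative:

```python
import numpy as np

def mixed_newton_scalar(df_dzbar, d2f_dzbar_dz, z0, n_iters=20):
    """Scalar Mixed Newton iteration z <- z - (d^2 f / dzbar dz)^(-1) * df/dzbar,
    using Wirtinger derivatives of a real-valued f(z)."""
    z = z0
    for _ in range(n_iters):
        z = z - df_dzbar(z) / d2f_dzbar_dz(z)
    return z

# Example (hand-computed Wirtinger derivatives): f(z) = |z^2 - 1|^2.
# df/dzbar = 2*conj(z)*(z^2 - 1),  d^2 f / dzbar dz = 4*|z|^2.
root = mixed_newton_scalar(
    df_dzbar=lambda z: 2 * np.conj(z) * (z**2 - 1),
    d2f_dzbar_dz=lambda z: 4 * abs(z)**2,
    z0=0.5 + 0.5j,
)
# `root` approaches a global minimizer of f (a root of z^2 = 1).
```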
An approach for the nonconvex uniformly concave structured saddle point problem
Recently, saddle point problems have received much attention due to their powerful modeling capability for a multitude of problems from diverse domains. Applications of these problems occur in numerous modern applied areas, such…
Adaptive first-order methods for relatively strongly convex optimization problems
This article is devoted to some adaptive first-order methods for optimization problems with relatively strongly convex functionals. The notion of relative strong convexity, which has recently emerged in optimization, significantly expands…
Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
In this paper we propose a generalized condition for a sharp minimum, somewhat similar to the inexact oracle proposed recently by Devolder-Glineur-Nesterov. The proposed approach makes it possible to extend the class of applicability of…
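For reference, the classical sharp-minimum condition being relaxed states that, for some $\alpha > 0$,

$$f(x) - f^* \ge \alpha \,\operatorname{dist}(x, X_*) \quad \text{for all feasible } x,$$

where $X_*$ is the solution set; under it, subgradient methods with Polyak-type steps converge at a linear rate.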
An Approach for Non-Convex Uniformly Concave Structured Saddle Point Problem
Recently, saddle point problems have received much attention due to their powerful modeling capability for a lot of problems from diverse domains. Applications of these problems occur in many applied areas, such as robust optimization, dis…
Some Methods for Relatively Strongly Monotone Variational Inequalities
The article is devoted to the development of numerical methods for solving variational inequalities with relatively strongly monotone operators. We consider two classes of variational inequalities related to some analogs of the Lipschitz c…
Gradient-Type Adaptive Methods for Relatively Lipschitz Convex Optimization Problems
Recently, some innovative convex optimization concepts were proposed, namely relative smoothness [1] and relative strong convexity [2,3]. These approaches have significantly expanded the class of applicability of gradient-type method…
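For reference, relative smoothness of $f$ with respect to a prox-function $d$ requires

$$f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + L\,V(y, x), \qquad V(y, x) = d(y) - d(x) - \langle \nabla d(x), y - x \rangle,$$

where $V$ is the Bregman divergence of $d$; relative strong convexity is the matching lower bound with $\mu\,V(y, x)$ in place of $L\,V(y, x)$.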