Pascal Bianchi
Consensus-Based Optimization Beyond Finite-Time Analysis
We analyze a zeroth-order particle algorithm for the global optimization of a non-convex function, focusing on a variant of Consensus-Based Optimization (CBO) with small but fixed noise intensity. Unlike most previous studies restricted to…
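As a rough sketch of the particle dynamics behind CBO (a generic anisotropic variant with the noise intensity sigma held fixed, as in the abstract above; the parameter names and the exact drift/diffusion form are assumptions, not necessarily the authors' scheme), one Euler step can be written as:

import numpy as np

def cbo_step(X, f, alpha=30.0, lam=1.0, sigma=0.5, dt=0.01, rng=None):
    # X: (N, d) particle positions; f: objective R^d -> R.
    rng = np.random.default_rng() if rng is None else rng
    w = np.exp(-alpha * np.apply_along_axis(f, 1, X))   # Gibbs weights
    m = (w[:, None] * X).sum(axis=0) / w.sum()          # weighted consensus point
    drift = -lam * (X - m) * dt                         # pull particles toward m
    noise = sigma * (X - m) * np.sqrt(dt) * rng.standard_normal(X.shape)
    return X + drift + noise                            # fixed noise intensity sigma

Each particle is attracted to a Gibbs-weighted average of the swarm and perturbed by noise proportional to its distance from that average.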
Stochastic mirror descent for nonparametric adaptive importance sampling
This paper addresses the problem of approximating an unknown probability distribution with density $f$ -- which can only be evaluated up to an unknown scaling factor -- with the help of a sequential algorithm that produces at each iteratio…
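For reference, the textbook finite-dimensional template of stochastic mirror descent (the paper works over probability distributions; the mirror map $\phi$ and stepsize $\eta_t$ below are generic placeholders) is
$$x_{t+1} \in \operatorname*{arg\,min}_{x \in \mathcal{X}} \; \eta_t \langle \hat g_t, x \rangle + D_\phi(x, x_t), \qquad D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle,$$
where $\hat g_t$ is a stochastic (sub)gradient and $D_\phi$ is the Bregman divergence of the mirror map.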
On the importance of wind predictions in wake steering optimization
Wake steering is a technique that optimizes the energy production of a wind farm by employing yaw control to misalign upstream turbines with the incoming wind direction. This work highlights the important dependence between wind direction …
Long-time asymptotics of noisy SVGD outside the population limit
Stein Variational Gradient Descent (SVGD) is a widely used sampling algorithm that has been successfully applied in several areas of Machine Learning. SVGD operates by iteratively moving a set of interacting particles (which represent the …
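A minimal sketch of one (deterministic) SVGD update with an RBF kernel follows; the kernel choice, bandwidth h, and stepsize eps are generic assumptions, and the noisy variant analyzed in the paper would add a stochastic perturbation on top of this step:

import numpy as np

def svgd_step(X, grad_logp, h=1.0, eps=0.1):
    # X: (N, d) particles; grad_logp: score function of the target, R^d -> R^d.
    N = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]             # pairwise differences (N, N, d)
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h))       # RBF kernel matrix (N, N)
    scores = np.stack([grad_logp(x) for x in X])     # (N, d)
    # Stein variational direction: kernel-smoothed scores plus a repulsion term.
    phi = (K @ scores + (K[:, :, None] * diff).sum(axis=1) / h) / N
    return X + eps * phi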
Comment on wes-2023-172
Wake steering is a technique that optimises the energy production of a wind farm by employing yaw control to misalign upstream turbines with the incoming wind direction. This work highlights the important dependence between wind …
Stochastic Subgradient Descent Escapes Active Strict Saddles on Weakly Convex Functions
In nonsmooth stochastic optimization, we establish the nonconvergence of the stochastic subgradient descent (SGD) to the critical points recently called active strict saddles by Davis and Drusvyatskiy. Such points lie on a manifold M, wher…
A closed-measure approach to stochastic approximation
This paper introduces a new method to tackle the issue of the almost sure convergence of stochastic approximation algorithms defined from a differential inclusion. Under the assumption of slowly decaying step-sizes, we establish that the s…
Analysis of a Target-Based Actor-Critic Algorithm with Linear Function Approximation
Actor-critic methods integrating target networks have exhibited a stupendous empirical success in deep reinforcement learning. However, a theoretical understanding of the use of target networks in actor-critic methods is largely missing in…
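To make the target-network mechanism concrete, here is a minimal sketch of a linear TD(0) critic whose bootstrap target uses a slowly tracking copy of the weights (a generic construction; the paper's actual algorithm, its two-timescale structure, and the actor update are not reproduced here):

import numpy as np

def critic_update(theta, theta_tgt, phi_s, phi_s2, r, gamma=0.99, lr=0.05, tau=0.01):
    # Linear critic V(s) ~ phi(s)^T theta; theta_tgt is the target-network copy.
    td_target = r + gamma * phi_s2 @ theta_tgt          # bootstrap from target weights
    delta = td_target - phi_s @ theta                   # TD error
    theta = theta + lr * delta * phi_s                  # semi-gradient update
    theta_tgt = theta_tgt + tau * (theta - theta_tgt)   # slow Polyak tracking
    return theta, theta_tgt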
Convergence and Dynamical Behavior of the ADAM Algorithm for Nonconvex Stochastic Optimization
Adam is a popular variant of stochastic gradient descent for finding a local minimizer of a function. In the constant stepsize regime, assuming that the objective function is differentiable and non-convex, we establish the convergence in…
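For reference, the standard Adam update with a constant stepsize alpha (the regime studied in the paper) reads:

import numpy as np

def adam_step(x, g, m, v, t, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # g: stochastic gradient at x; t: iteration counter starting at 1.
    m = b1 * m + (1 - b1) * g            # first-moment (momentum) average
    v = b2 * v + (1 - b2) * g * g        # second-moment average
    m_hat = m / (1 - b1 ** t)            # bias corrections
    v_hat = v / (1 - b2 ** t)
    return x - alpha * m_hat / (np.sqrt(v_hat) + eps), m, v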
Stochastic optimization with momentum: Convergence, fluctuations, and traps avoidance
In this paper, a general stochastic optimization procedure is studied, unifying several variants of the stochastic gradient descent such as, among others, the stochastic heavy ball method, the Stochastic Nesterov Accelerated Gradient algor…
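A representative member of this family is the stochastic heavy-ball iteration (textbook form; the paper's general procedure and its parameter schedules are broader than this special case):
$$x_{n+1} = x_n - \gamma_{n+1}\, \hat g_{n+1} + \beta_n \,(x_n - x_{n-1}),$$
where $\hat g_{n+1}$ is a noisy gradient estimate; Nesterov-type variants instead evaluate the gradient at the extrapolated point $x_n + \beta_n (x_n - x_{n-1})$.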
Conditional independence testing via weighted partial copulas and nearest neighbors
This paper introduces the weighted partial copula function for testing conditional independence. The proposed test procedure results from two ingredients: (i) the test statistic is an explicit Cramér-von Mises transformation…
Convergence of constant step stochastic gradient descent for non-smooth non-convex functions
This paper studies the asymptotic behavior of constant-step Stochastic Gradient Descent for the minimization of an unknown function F, defined as the expectation of a non-convex, non-smooth, locally Lipschitz random function. As the g…
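The iteration in question is the constant-step scheme (notation mine):
$$x_{n+1} = x_n - \gamma\, g(x_n, \xi_{n+1}), \qquad F(x) = \mathbb{E}_\xi\big[f(x, \xi)\big],$$
where $g(\cdot, \xi)$ is a (Clarke) subgradient selection of the non-smooth random function $f(\cdot, \xi)$ and the stepsize $\gamma > 0$ is held fixed.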
Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Non Convex Optimization
Although ADAM is a very popular algorithm for optimizing the weights of neural networks, it has been recently shown that it can diverge even in simple convex optimization examples. Several variants of ADAM have been proposed to circumvent …
Learning Methods for RSSI-based Geolocation: A Comparative Study
Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems Over Large Graphs
A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical regularization examples include the total variation and the Laplacian regularizations over …
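The underlying template is the stochastic proximal gradient iteration (a generic form; the way Snake samples the graph regularization term is its key ingredient and is not shown here):
$$x_{n+1} = \operatorname{prox}_{\gamma_{n+1} g_{n+1}}\!\big(x_n - \gamma_{n+1} \nabla f_{n+1}(x_n)\big), \qquad \operatorname{prox}_{\gamma g}(y) = \operatorname*{arg\,min}_{x}\; g(x) + \tfrac{1}{2\gamma}\|x - y\|^2,$$
where $f_{n+1}$ and $g_{n+1}$ are randomly sampled smooth and regularization components.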
A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
This paper introduces a coordinate descent version of the Vũ-Condat algorithm. By coordinate descent, we mean that only a subset of the coordinates of the primal and dual iterates is updated at each iteration, the other coordinates b…
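For context, the full (non-coordinate) Vũ-Condat iteration for $\min_x f(x) + g(x) + h(Lx)$, with $f$ smooth and $\tau, \sigma > 0$ suitable step sizes, is
$$x_{k+1} = \operatorname{prox}_{\tau g}\!\big(x_k - \tau(\nabla f(x_k) + L^\top y_k)\big), \qquad y_{k+1} = \operatorname{prox}_{\sigma h^*}\!\big(y_k + \sigma L(2x_{k+1} - x_k)\big);$$
the coordinate-descent version studied here updates only a random subset of the primal and dual coordinates at each iteration.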
Constant step stochastic approximations involving differential inclusions: stability, long-run convergence and applications
A Constant Step Stochastic Douglas-Rachford Algorithm with Application to non Separable Regularizations
The Douglas-Rachford algorithm converges to a minimizer of a sum of two convex functions. It consists of fixed-point iterations involving computations of the proximity operators of the two functions separate…
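In its deterministic form, the Douglas-Rachford iteration for $\min_x f(x) + g(x)$ reads
$$x_{n+1} = \operatorname{prox}_{\gamma f}(z_n), \qquad z_{n+1} = z_n + \operatorname{prox}_{\gamma g}\big(2 x_{n+1} - z_n\big) - x_{n+1},$$
with $x_n$ converging to a minimizer of $f + g$; in the stochastic variant studied here, the proximity operators are applied to random realizations of the two functions (my paraphrase of the constant-step setting).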
Distributed Approach For Deblurring Large Images With Shift-Variant Blur
Published in the proceedings of EUSIPCO 2017, Kos Island, Greece.
An adaptive distributed asynchronous algorithm with application to target localization