Benjamin Dupuis
Mutual Information Free Topological Generalization Bounds via Stability
Providing generalization guarantees for stochastic optimization algorithms is a major challenge in modern learning theory. Recently, several studies highlighted the impact of the geometry of training trajectories on the generalization erro…
Algorithm- and Data-Dependent Generalization Bounds for Score-Based Generative Models
Score-based generative models (SGMs) have emerged as one of the most popular classes of generative models. A substantial body of work now exists on the analysis of SGMs, focusing either on discretization aspects or on their statistical per…
Understanding the Generalization Error of Markov algorithms through Poissonization
Using continuous-time stochastic differential equation (SDE) proxies for stochastic optimization algorithms has proven fruitful for understanding their generalization abilities. A significant part of these approaches is based on the so-cal…
Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms
We present a novel set of rigorous and computationally efficient topology-based complexity notions that exhibit a strong correlation with the generalization gap in modern deep neural networks (DNNs). DNNs show remarkable generalization pro…
Uniform Generalization Bounds on Data-Dependent Hypothesis Sets via PAC-Bayesian Theory on Random Sets
We propose data-dependent uniform generalization bounds by approaching the problem from a PAC-Bayesian perspective. We first apply the PAC-Bayesian framework on "random sets" in a rigorous way, where the training algorithm is assumed to ou…
Generalization Bounds for Heavy-Tailed SDEs through the Fractional Fokker-Planck Equation
Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention over the past few years. While illuminating interesting aspects of stochastic optimizers by using heavy-tailed st…
From Mutual Information to Expected Dynamics: New Generalization Bounds for Heavy-Tailed SGD
Understanding the generalization abilities of modern machine learning algorithms has been a major research topic over the past decades. In recent years, the learning dynamics of Stochastic Gradient Descent (SGD) have been related to heavy-…
Generalization Bounds with Data-dependent Fractal Dimensions
Providing generalization guarantees for modern neural networks has been a crucial task in statistical learning. Recently, several studies have attempted to analyze the generalization error in such settings by using tools from fractal geome…
DNN-Based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel
We study the Solid Isotropic Material Penalisation (SIMP) method with a density field generated by a fully-connected neural network, taking the coordinates as inputs. In the large width limit, we show that the use of DNNs leads to a filter…
Trajectoires bohmiennes de différents systèmes quantiques (Bohmian Trajectories of Various Quantum Systems)