Adrien Saumard
On the pointwise and sup-norm errors for local regression estimators
In this paper, we analyze the behavior of various non-parametric local regression estimators, i.e. estimators that are based on local averaging, for estimating a Lipschitz regression function at a fixed point, or in sup-norm. We first prov…
A theory of shape regularity for local regression maps
We introduce the concept of shape-regular regression maps as a framework to derive optimal rates of convergence for various non-parametric local regression estimators. Using Vapnik-Chervonenkis theory, we establish upper and lower bounds o…
Covariance inequalities for convex and log-concave functions
Extending results of Hargé and Hu for the Gaussian measure, we prove inequalities for the covariance Cov$_\mu(f, g)$ where $\mu$ is a general product probability measure on $\mathbb{R}^d$ and $f, g : \mathbb{R}^d \to \mathbb{R}$ satisfy some convexity or log-concavity assumption…
Phase transitions for support recovery under local differential privacy
We address the problem of variable selection in a high-dimensional but sparse mean model, under the additional constraint that only privatized data are available for inference. The original data are vectors with independent entries having …
Topics in robust statistical learning
Some recent contributions to robust inference are presented. Firstly, the classical problem of robust M-estimation of a location parameter is revisited using an optimal transport approach - with specifically designed Wasserstein-type dista…
High-dimensional logistic entropy clustering
Minimization of the (regularized) entropy of classification probabilities is a versatile class of discriminative clustering methods. The classification probabilities are usually defined through the use of some classical losses from supervi…
Finite Sample Improvement of Akaike's Information Criterion
We emphasize that it is possible to improve the principle of unbiased risk estimation for model selection by addressing excess risk deviations in the design of penalization procedures. Indeed, we propose a modification of Akaike's Informat…
Sharp phase transitions for exact support recovery under local differential privacy
We address the problem of variable selection in the Gaussian mean model in $\mathbb{R}^d$ under the additional constraint that only privatised data are available for inference. For this purpose, we adopt a recent generalisation of classica…
Local differential privacy: Elbow effect in optimal density estimation and adaptation over Besov ellipsoids
We address the problem of non-parametric density estimation under the additional constraint that only privatised data are allowed to be published and available for inference. For this purpose, we adopt a recent generalisation of classical …
Relaxing the Gaussian assumption in Shrinkage and SURE in high dimension
Shrinkage estimation is a fundamental tool of modern statistics, pioneered by Charles Stein upon his discovery of the famous paradox involving the multivariate Gaussian. A large portion of the subsequent literature only considers the effic…
Weighted Poincaré inequalities, concentration inequalities and tail bounds related to Stein kernels in dimension one
We investigate links between the so-called Stein's density approach in dimension one and some functional and concentration inequalities. We show that measures having a finite first moment and a density with connected support satisfy a weig…
On the isoperimetric constant, covariance inequalities and $L_{p}$-Poincaré inequalities in dimension one
Firstly, we derive in dimension one a new covariance inequality of $L_{1}-L_{\infty}$ type that characterizes the isoperimetric constant as the best constant achieving the inequality. Secondly, we generalize our result to $L_{p}-L_{q}$ bou…
Bi-log-concavity: some properties and some remarks towards a multi-dimensional extension
Bi-log-concavity of probability measures is a univariate extension of the notion of log-concavity that has recently been proposed in the statistical literature. Among other things, it has the nice property from a modeling perspective to …
Model Selection as a Multiple Testing Procedure: Improving Akaike's Information Criterion
By interpreting the model selection problem as a multiple hypothesis testing task, we propose a modification of Akaike's Information Criterion that avoids overfitting, even when the sample size is small. We call this correction an over-pen…
Efron's monotonicity property for measures on $\mathbb{R}^2$
First we prove some kernel representations for the covariance of two functions taken on the same random variable and deduce kernel representations for some functionals of a continuous one-dimensional measure. Then we apply these formulas t…
A concentration inequality for the excess risk in least-squares regression with random design and heteroscedastic noise
We prove a new and general concentration inequality for the excess risk in least-squares regression with random design and heteroscedastic noise. No specific structure is required on the model, except the existence of a suitable function t…
On optimality of empirical risk minimization in linear aggregation
In the first part of this paper, we show that the small-ball condition, recently introduced by Mendelson (2015), may behave poorly for important classes of localized functions such as wavelets, piecewise polynomials or trigonometric polyno…