Nicolas Keriven
Graphical Kernel Ridge Regression in Latent Position Models
Taxonomy of reduction matrices for Graph Coarsening
Graph coarsening aims to diminish the size of a graph to lighten its memory footprint, and has numerous applications in graph signal processing and machine learning. It is usually defined using a reduction matrix and a lifting matrix, whic…
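As a concrete illustration of the reduction/lifting formalism this abstract refers to, here is a minimal numpy sketch; the toy graph, the averaging reduction matrix, and the pseudo-inverse lifting are illustrative choices, not the paper's taxonomy.

    import numpy as np

    # Toy graph on 6 nodes.
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)

    # Coarsen into 2 super-nodes: {0,1,2} and {3,4,5}.
    # Reduction matrix Q: row k averages the nodes of cluster k.
    Q = np.array([[1, 1, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 1]], dtype=float) / 3.0

    # One common lifting matrix is the pseudo-inverse of Q; here it simply
    # copies each super-node value back to every member of its cluster.
    Q_plus = np.linalg.pinv(Q)

    A_coarse = Q @ A @ Q.T          # coarsened graph (2 x 2)
    x = np.random.randn(6)          # a graph signal
    x_lifted = Q_plus @ (Q @ x)     # reduce then lift the signal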
Backward Oversmoothing: why is it hard to train deep Graph Neural Networks?
Oversmoothing has long been identified as a major limitation of Graph Neural Networks (GNNs): input node features are smoothed at each layer and converge to a non-informative representation, if the weights of the GNN are sufficiently bound…
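The forward half of this phenomenon is easy to reproduce; a minimal numpy sketch (random toy graph, mean aggregation without weights) shows node features collapsing to a near-constant, non-informative state:

    import numpy as np

    n = 50
    A = (np.random.rand(n, n) < 0.2).astype(float)
    A = np.triu(A, 1); A = A + A.T              # symmetric, no multi-edges
    A = A + np.eye(n)                           # self-loops
    P = A / A.sum(axis=1, keepdims=True)        # mean aggregation operator

    X = np.random.randn(n, 8)                   # input node features
    for _ in range(100):
        X = P @ X                               # one smoothing layer

    # Per-feature standard deviation across nodes is now close to zero:
    print(X.std(axis=0))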
Node Regression on Latent Position Random Graphs via Local Averaging
Node regression consists in predicting the value of a graph label at a node, given observations at the other nodes. To gain some insight into the performance of various estimators for this task, we perform a theoretical study in a context …
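One natural estimator in this family is plain local averaging over labelled neighbours; a hypothetical toy version, not necessarily the exact estimator analysed in the paper:

    import numpy as np

    n = 100
    A = (np.random.rand(n, n) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T              # random toy graph
    y = np.random.randn(n)                      # node labels
    labelled = np.zeros(n, dtype=bool)
    labelled[: n // 2] = True                   # observed subset

    def local_average(i):
        """Predict y[i] as the mean label of i's labelled neighbours."""
        nbrs = (A[i] > 0) & labelled
        return y[nbrs].mean() if nbrs.any() else y[labelled].mean()

    preds = [local_average(i) for i in np.where(~labelled)[0]]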
Seeking universal approximation for continuous counterparts of GNNs on large random graphs
Graph Coarsening with Message-Passing Guarantees
Graph coarsening aims to reduce the size of a large graph while preserving some of its key properties, which has been used in many applications to reduce computational load and memory footprint. For instance, in graph machine learning, tra…
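The generic pattern behind such guarantees can be sketched as follows, reusing the reduction matrix Q and lifting matrix Q_plus from the coarsening sketch above; the coarse propagation matrix Q A Q_plus is one common choice, not necessarily the paper's:

    import numpy as np

    def coarse_message_passing(A, Q, Q_plus, X, n_layers=3):
        """Run message passing on the coarsened graph, then lift back."""
        A_c = Q @ A @ Q_plus        # propagation matrix on the coarse graph
        Z = Q @ X                   # reduced node features
        for _ in range(n_layers):
            Z = A_c @ Z             # message passing on the small graph
        return Q_plus @ Z           # lift the result to all original nodes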
Entropic Optimal Transport on Random Graphs
Convergence of Graph Neural Networks with generic aggregation functions on random graphs
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Random Graphs
What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for gra…
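A standard way to inject positional information, shown here with Laplacian eigenvectors; this is one common scheme, not necessarily the exact encoding analysed in the paper:

    import numpy as np

    def add_spectral_positional_encoding(A, X, k=4):
        """Append the first k non-trivial Laplacian eigenvectors to X."""
        L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
        _, eigvecs = np.linalg.eigh(L)          # ascending eigenvalues
        pe = eigvecs[:, 1 : k + 1]              # skip the constant vector
        return np.concatenate([X, pe], axis=1)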
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity. Until now, this convergence was only known for architectures with aggregatio…
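A minimal sketch of a message passing layer where the aggregation is a generic, pluggable map from neighbour features to a single vector (the toy aggregators below are illustrative):

    import numpy as np

    def mp_layer(A, X, W, aggregate):
        """One message passing layer with a generic aggregation function."""
        out = np.zeros((A.shape[0], W.shape[1]))
        for i in range(A.shape[0]):
            msgs = X[A[i] > 0]                  # neighbour features
            if len(msgs):
                out[i] = aggregate(msgs) @ W
        return np.maximum(out, 0)               # ReLU

    mean_agg = lambda msgs: msgs.mean(axis=0)   # "convolutive" aggregation
    max_agg = lambda msgs: msgs.max(axis=0)     # a non-mean aggregator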
Gradient scarcity with Bilevel Optimization for Graph Learning
A common issue in graph learning under the semi-supervised setting is referred to as gradient scarcity. That is, learning graphs by minimizing a loss on a subset of nodes causes edges between unlabelled nodes that are far from labelled one…
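The effect is easy to see on a toy example; a hypothetical torch sketch (path graph, two propagation layers, loss observed at a single node), not the paper's bilevel formulation:

    import torch

    n, n_layers = 10, 2
    A = torch.zeros(n, n)                       # learnable path graph
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    A.requires_grad_(True)

    H = torch.eye(n)                            # one-hot input features
    for _ in range(n_layers):
        H = A @ H                               # propagation layers

    y = torch.zeros(n); y[0] = 1.0
    loss = (H.sum(dim=1)[0] - y[0]) ** 2        # loss on node 0 only
    loss.backward()

    # Edges within n_layers hops of node 0 receive a gradient; edges
    # farther away get exactly zero ("gradient scarcity"):
    print(A.grad[0, 1], A.grad[5, 6])           # non-zero, zero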
Supervised Learning of Analysis-Sparsity Priors With Automatic Differentiation
Sparsity priors are commonly used in denoising and image reconstruction. For analysis-type priors, a dictionary defines a representation of signals that is likely to be sparse. In most situations, this dictionary is not known, and is to…
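A minimal, hypothetical torch sketch of the general idea (an unrolled solver for the analysis-sparsity objective, kept differentiable in the dictionary D, with tanh as a smooth surrogate for the l1 subgradient); the paper's actual algorithm may differ:

    import torch

    def denoise(y, D, lam=0.1, steps=30, lr=0.2):
        """Unrolled gradient descent on 0.5*||x - y||^2 + lam*||D x||_1,
        with a smoothed l1 term so everything stays differentiable in D."""
        x = y.clone()
        for _ in range(steps):
            grad = (x - y) + lam * D.T @ torch.tanh(D @ x)
            x = x - lr * grad
        return x

    d, m = 8, 12
    D = torch.randn(m, d, requires_grad=True)   # analysis dictionary
    x_clean = torch.randn(d)
    y = x_clean + 0.1 * torch.randn(d)          # noisy observation
    loss = ((denoise(y, D) - x_clean) ** 2).sum()
    loss.backward()                             # gradient w.r.t. D itself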
Stability of Entropic Wasserstein Barycenters and application to random geometric graphs
As interest in graph data has grown in recent years, the computation of various geometric tools has become essential. In some areas, such as mesh processing, these tools often rely on the computation of geodesics and shortest paths in discretized m…
Not too little, not too much: a theoretical analysis of graph (over)smoothing
We analyze graph smoothing with "mean aggregation", where each node successively receives the average of the features of its neighbors. Indeed, it has quickly been observed that Graph Neural Networks (GNNs), which generally follow som…
Entropic Optimal Transport in Random Graphs
In graph analysis, a classic task consists in computing similarity measures between (groups of) nodes. In latent space random graphs, nodes are associated to unknown latent variables. One may then seek to compute distances directly in the …
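The entropic OT cost itself is computed with standard Sinkhorn iterations; a minimal sketch for uniform marginals, where the cost matrix C between node groups is whatever the application dictates (e.g., squared distances between estimated latent positions):

    import numpy as np

    def entropic_ot(C, eps=0.1, iters=200):
        """Entropic OT cost between uniform marginals via Sinkhorn."""
        n, m = C.shape
        a, b = np.ones(n) / n, np.ones(m) / m
        K = np.exp(-C / eps)                    # Gibbs kernel
        u, v = np.ones(n), np.ones(m)
        for _ in range(iters):
            u = a / (K @ v)
            v = b / (K.T @ u)
        P = u[:, None] * K * v[None, :]         # transport plan
        return (P * C).sum()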
Sparse and smooth: Improved guarantees for spectral clustering in the dynamic stochastic block model
In this paper, we analyse classical variants of the Spectral Clustering (SC) algorithm in the Dynamic Stochastic Block Model (DSBM). Existing results show that, in the relatively sparse case where the expected degree grows logarithmically …
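For reference, the classical SC pipeline being analysed; the dynamic variants smooth the adjacency over a time window before clustering (the toy k-means below is illustrative only):

    import numpy as np

    def spectral_clustering(A, k=2, iters=20, seed=0):
        """Embed nodes with the top-k adjacency eigenvectors, then k-means."""
        _, eigvecs = np.linalg.eigh(A)
        emb = eigvecs[:, -k:]                   # leading eigenvectors
        rng = np.random.default_rng(seed)
        centers = emb[rng.choice(len(emb), k, replace=False)]
        for _ in range(iters):
            d2 = ((emb[:, None] - centers[None]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            centers = np.array([emb[labels == c].mean(axis=0)
                                if (labels == c).any() else centers[c]
                                for c in range(k)])
        return labels

    # Dynamic variant: cluster the average of adjacency snapshots over a
    # sliding time window instead of a single snapshot.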
Sketching Data Sets for Large-Scale Learning: Keeping only what you need
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is…
Statistical learning guarantees for compressive clustering and compressive mixture modeling
We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. Th…
Compressive statistical learning with random feature moments
We describe a general framework — compressive statistical learning — for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized mom…
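The sketch itself is a single vector of averaged random features, computed in one pass over the data; a minimal numpy sketch with random Fourier features as the generalized moments (parameter choices are illustrative):

    import numpy as np

    def sketch(X, m=100, sigma=1.0, seed=0):
        """Compress an (n x d) dataset into one m-dimensional sketch: the
        empirical average of random Fourier features of the samples."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], m))
        return np.exp(1j * X @ W).mean(axis=0)  # one pass, O(m) memory

    # Learning (e.g., fitting a mixture model) then proceeds from the
    # sketch alone, by matching the model's sketch to the empirical one.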
On the Universality of Graph Neural Networks on Large Random Graphs
We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs. In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs, which directly enables a study of their…
Fast Graph Kernel with Optical Random Features
The graphlet kernel is a classical method in graph classification. It however suffers from a high computation cost due to the isomorphism test it includes. As a generic proxy, and in general at the cost of losing some information, this tes…
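The generic proxy can be sketched as follows: sample small subgraphs and average a random feature map of their adjacency patterns instead of running isomorphism tests. In the paper the map is computed optically; an ordinary random cosine map stands in here:

    import numpy as np

    def graph_embedding(A, k=4, n_samples=200, m=64, seed=0):
        """Random-feature proxy for the graphlet kernel."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(k * k, m))          # random projection
        feats = []
        for _ in range(n_samples):
            idx = rng.choice(A.shape[0], k, replace=False)
            patch = A[np.ix_(idx, idx)].ravel()  # k-node subgraph pattern
            feats.append(np.cos(patch @ W))      # random feature map
        return np.mean(feats, axis=0)

    # The kernel between two graphs is approximated by the inner product
    # of their embeddings.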
Sketching Datasets for Large-Scale Learning (long version)
It is an extended version of https://hal.inria.fr/hal-03350599 (official version published with DOI: https://doi.org/10.1109/MSP.2021.3092574) with additional references and more in-depth discussions on a variety of topics. A python notebo…
Convergence and Stability of Graph Convolutional Networks on Large Random Graphs
We study properties of Graph Convolutional Networks (GCNs) by analyzing their behavior on standard models of random graphs, where nodes are represented by random latent variables and edges are drawn according to a similarity kernel. This a…
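A hypothetical toy instantiation of this random graph model, with one degree-normalized convolution layer (the kernel and normalization are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(size=n)                             # latent variables
    kernel = lambda a, b: np.exp(-(a - b) ** 2 / 0.1)   # similarity kernel
    P = kernel(x[:, None], x[None, :])
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                      # random graph

    # One graph convolution layer; as n grows, its output approaches the
    # corresponding continuous operator applied to functions of x.
    X = np.stack([x, x ** 2], axis=1)                   # features of latents
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    H = np.maximum((A / deg) @ X @ rng.normal(size=(2, 2)), 0)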