Keaton Hamm
Persistent Classification: Understanding Adversarial Attacks by Studying Decision Boundary Dynamics
There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high-dimensionality of the data, the high codimension in the ambient space of the data manifolds of interest, …
Persistent Classification: A New Approach to Stability of Data and Adversarial Examples
There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high-dimensionality of the data, high codimension in the ambient space of the data manifolds of interest, and …
Manifold learning in Wasserstein space
This paper aims at building the theoretical foundations for manifold learning algorithms in the space of absolutely continuous probability measures $\mathcal{P}_{\mathrm{a.c.}}(\Omega)$ with $\Omega$ a compact and convex subset of $\mathbb{R}^d$, me…
Wasserstein approximation schemes based on Voronoi partitions
We consider structured approximation of measures in Wasserstein space $\mathrm{W}_p(\mathbb{R}^d)$ for $p\in[1,\infty)$ using general measure approximants compactly supported on Voronoi regions derived from a scaled Voronoi partition of $\…
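The quantization idea behind such schemes can be illustrated with a minimal numpy sketch (an illustration only, not the paper's construction): snapping samples to the nearest point of a scaled lattice is exactly assignment to Voronoi cells, and the rounding map itself is a coupling that upper-bounds $\mathrm{W}_2$ by half the cell diagonal, $h\sqrt{d}/2$.

```python
import numpy as np

rng = np.random.default_rng(3)
h = 0.1                                  # lattice spacing
X = rng.uniform(0, 1, size=(2000, 2))    # samples of a measure on [0,1]^2

# Nearest lattice point = the Voronoi cell (a cube of side h) containing x.
Q = np.round(X / h) * h

# The rounding map is a coupling of the empirical measure and its
# quantization, so W_2 is at most the RMS displacement, which is
# bounded by half the cell diagonal: h * sqrt(d) / 2.
w2_upper = np.sqrt(np.mean(np.sum((X - Q) ** 2, axis=1)))
bound = h * np.sqrt(2) / 2
```

Each coordinate moves by at most $h/2$ under rounding, which is where the $h\sqrt{d}/2$ bound comes from.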
On Wasserstein distances for affine transformations of random vectors
We expound on some known lower bounds of the quadratic Wasserstein distance between random vectors in $\mathbb{R}^n$ with an emphasis on affine transformations that have been used in manifold learning of data in Wasserstein space. In parti…
Wassmap: Wasserstein Isometric Mapping for Image Manifold Learning
In this paper, we propose Wasserstein Isometric Mapping (Wassmap), a nonlinear dimensionality reduction technique that provides solutions to some drawbacks in existing global nonlinear dimensionality reduction algorithms in imaging applic…
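The Isomap-style pipeline Wassmap follows, pairwise Wasserstein distances fed into classical multidimensional scaling, can be sketched in a few lines of numpy for 1-D measures, where the quadratic Wasserstein distance reduces to pairing sorted samples. This is an illustrative toy, not the paper's implementation:

```python
import numpy as np

def w2_1d(x, y):
    """Quadratic Wasserstein distance between two 1-D samples of
    equal size: sort both and pair order statistics."""
    xs, ys = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((xs - ys) ** 2))

def mds(D, dim=2):
    """Classical multidimensional scaling on a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy "image manifold": translates of one sample cloud; W2 between
# translates is exactly the translation distance.
rng = np.random.default_rng(0)
base = rng.normal(size=500)
shifts = np.linspace(0.0, 4.0, 9)
measures = [base + t for t in shifts]

n = len(measures)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = w2_1d(measures[i], measures[j])

emb = mds(D, dim=1)  # recovers the shift parameter up to sign and centering
```

For translated measures the recovered embedding is equispaced, mirroring the isometry the method is built around.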
Boosting Nyström Method
The Nyström method is an effective tool to generate low-rank approximations of large matrices, and it is particularly useful for kernel-based learning. To improve the standard Nyström approximation, ensemble Nyström algorithms compute a mi…
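For context, the standard (non-ensemble) Nyström approximation that the abstract builds on can be sketched as follows, assuming numpy; the boosted variant proposed in the paper is not reproduced here:

```python
import numpy as np

def nystrom(K, landmarks):
    """Standard Nystrom approximation of a PSD kernel matrix K:
    K ~= C @ pinv(W) @ C.T, where C holds the sampled columns and
    W is the landmark-landmark block."""
    C = K[:, landmarks]
    W = K[np.ix_(landmarks, landmarks)]
    return C @ np.linalg.pinv(W) @ C.T

# RBF kernel on 1-D points; its numerical rank is low, so a few
# landmark columns already approximate the full 200 x 200 matrix well.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=200)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.5)

K_hat = nystrom(K, landmarks=np.arange(0, 200, 10))  # 20 landmarks
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Only the sampled columns of K are needed, which is the source of the method's savings on large kernel matrices.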
Linearized Wasserstein dimensionality reduction with approximation guarantees
We introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space. The algorithm is motivated by the observation that many datasets are naturally interpreted as probability measur…
Multi-Priority Graph Sparsification
A \emph{sparsification} of a given graph $G$ is a sparser graph (typically a subgraph) which aims to approximate or preserve some property of $G$. Examples of sparsifications include but are not limited to spanning trees, Steiner trees, sp…
Generalized Pseudoskeleton Decompositions
We characterize some variations of pseudoskeleton (also called CUR) decompositions for matrices and tensors over arbitrary fields. These characterizations extend previous results to arbitrary fields and to decompositions which use generali…
Riemannian CUR Decompositions for Robust Principal Component Analysis
Robust Principal Component Analysis (PCA) has received massive attention in recent years. It aims to recover a low-rank matrix and a sparse matrix from their sum. This paper proposes a novel nonconvex Robust PCA algorithm, coined Riemannia…
Computing Steiner Trees using Graph Neural Networks
Graph neural networks have been successful in many learning problems and real-world applications. A recent line of research explores the power of graph neural networks to solve combinatorial and graph algorithmic problems such as subgraph …
On Matrix Factorizations in Subspace Clustering
This article explores subspace clustering algorithms using CUR decompositions, and examines the effect of various hyperparameters in these algorithms on clustering performance on two real-world benchmark datasets, the Hopkins155 motion seg…
Multi-level weighted additive spanners
Given a graph G = (V,E), a subgraph H is an additive +β spanner if dist_H(u,v) ≤ dist_G(u,v) + β for all u, v ∈ V. A pairwise spanner is a spanner for which the above inequality is only required to hold for specific pairs P ⊆ V × V given o…
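The +β condition above is easy to check directly on small unweighted graphs with a stdlib-only BFS; the graph and subgraph below are illustrative examples, not taken from the paper:

```python
from collections import deque

def bfs_dist(adj, s):
    """Unweighted shortest-path distances from s via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_additive_spanner(adj_G, adj_H, beta):
    """Check dist_H(u, v) <= dist_G(u, v) + beta for all u, v."""
    for u in adj_G:
        dG, dH = bfs_dist(adj_G, u), bfs_dist(adj_H, u)
        for v in dG:
            if dH.get(v, float("inf")) > dG[v] + beta:
                return False
    return True

# 4-cycle G; H drops edge (0, 3), stretching that pair from 1 to 3,
# so H is an additive +2 spanner of G but not a +1 spanner.
G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
H = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

A pairwise spanner check would simply restrict the inner loop to the given pair set P.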
An Operator theoretic approach to the convergence of rearranged Fourier series
Editorial: Mathematical Fundamentals of Machine Learning
Editorial, Front. Appl. Math. Stat., 07 April 2021. https://doi.org/10.3389/fams.2021.674785
Mode-wise Tensor Decompositions: Multi-dimensional Generalizations of CUR Decompositions
Low rank tensor approximation is a fundamental tool in modern machine learning and data science. In this paper, we study the characterization, perturbation analysis, and an efficient sampling strategy for two primary tensor CUR approximati…
Weighted Sparse and Lightweight Spanners with Local Additive Error
An \emph{additive $+\beta$ spanner} of a graph $G$ is a subgraph which preserves shortest paths up to an additive $+\beta$ error. Additive spanners are well-studied in unweighted graphs but have only recently received attention in weighted…
Robust CUR Decomposition: Theory and Imaging Applications
This paper considers the use of Robust PCA in a CUR decomposition framework and applications thereof. Our main algorithms produce a robust version of column-row factorizations of matrices $\mathbf{D}=\mathbf{L}+\mathbf{S}$ where $\mathbf{L…
On Additive Spanners in Weighted Graphs with Local Error
Rapid Robust Principal Component Analysis: CUR Accelerated Inexact Low Rank Estimation
Robust principal component analysis (RPCA) is a widely used tool for dimension reduction. In this work, we propose a novel non-convex algorithm, coined Iterated Robust CUR (IRCUR), for solving RPCA problems, which dramatically improves …
Graph spanners: A tutorial review
Kruskal-based approximation algorithm for the multi-level Steiner tree problem
We study the multi-level Steiner tree problem: a generalization of the Steiner tree problem in graphs where terminals $T$ require varying priority, level, or quality of service. In this problem, we seek to find a minimum cost tree containi…
Stability of Sampling for CUR Decompositions
This article studies how to form CUR decompositions of low-rank matrices via primarily random sampling, though deterministic methods due to previous works are illustrated as well. The primary problem is to determine when a column submatrix…
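A minimal numpy sketch of the sampling setup the abstract studies (using one common choice of middle factor, not necessarily the paper's): for an exactly rank-r matrix, a handful of randomly sampled columns and rows almost surely span the column and row spaces, and then A = CUR holds exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Exactly rank-3 matrix: any 3 independent columns/rows determine it.
A = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))

cols = rng.choice(40, size=6, replace=False)   # sampled column indices
rows = rng.choice(60, size=6, replace=False)   # sampled row indices

C = A[:, cols]                                 # 60 x 6 column submatrix
R = A[rows, :]                                 # 6 x 40 row submatrix
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # one standard middle factor

err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Oversampling (6 columns and rows for rank 3) makes the rank condition hold with overwhelming probability, which is the stability question the article addresses.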