Raphael A. Meyer
Faster Linear Algebra Algorithms with Structured Random Matrices
To achieve the greatest possible speed, practitioners regularly implement randomized algorithms for low-rank approximation and least-squares regression with structured dimension reduction maps. Despite significant research effort, basic qu…
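The structured dimension reduction maps mentioned above can be illustrated with a sketch-and-solve least-squares solver using a sparse sign embedding (each input row hashed to one sketch row with a random sign). This is a minimal sketch of the general technique, not the paper's construction; the function name and parameters are illustrative.

```python
import numpy as np

def sparse_sketch_least_squares(A, b, m, rng=None):
    """Sketch-and-solve least squares: apply a sparse sign embedding
    S (one random +/-1 per input row, hashed into m sketch rows) to
    A and b, then solve the smaller m x d problem."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    rows = rng.integers(0, m, size=n)        # each input row maps to one sketch row
    signs = rng.choice([-1.0, 1.0], size=n)  # random sign per input row
    SA = np.zeros((m, d))
    Sb = np.zeros(m)
    np.add.at(SA, rows, signs[:, None] * A)  # accumulate signed rows of A
    np.add.at(Sb, rows, signs * b)
    x, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x
```

Applying the sketch costs only one pass over the nonzeros of A, which is what makes structured maps attractive compared with dense Gaussian sketches.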
Debiasing Polynomial and Fourier Regression
We study the problem of approximating an unknown function $f:\mathbb{R}\to\mathbb{R}$ by a degree-$d$ polynomial using as few function evaluations as possible, where error is measured with respect to a probability distribution $\mu$. Existin…
Does block size matter in randomized block Krylov low-rank approximation?
We study the problem of computing a rank-$k$ approximation of a matrix using randomized block Krylov iteration. Prior work has shown that, for block size $b = 1$ or $b = k$, a $(1 + \varepsilon)$-factor approximation to the best rank-$k$ a…
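A minimal numpy sketch of randomized block Krylov iteration with an explicit block-size parameter $b$, as studied above; this is the standard template (random start block, a few powers of $AA^\top$, then a restricted SVD), not necessarily the paper's exact variant.

```python
import numpy as np

def block_krylov_lowrank(A, k, b, q, rng=None):
    """Rank-k approximation of A via randomized block Krylov iteration:
    build the Krylov subspace [A X, (A A^T) A X, ..., (A A^T)^q A X]
    for a random n x b start block X, then take the best rank-k
    approximation of A within that subspace."""
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((A.shape[1], b))
    Y = A @ X
    blocks = []
    for _ in range(q + 1):
        blocks.append(Y)
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))       # orthonormal Krylov basis
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]       # best rank-k approx in span(Q)
```

Setting `b = 1` or `b = k` recovers the two regimes the abstract contrasts; the cost per iteration is $b$ matrix-vector products with $A$ and $A^\top$.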
Understanding the Kronecker Matrix-Vector Complexity of Linear Algebra
We study the computational model where we can access a matrix $\mathbf{A}$ only by computing matrix-vector products $\mathbf{A}\mathrm{x}$ for vectors of the form $\mathrm{x} = \mathrm{x}_1 \otimes \cdots \otimes \mathrm{x}_q$. We prove ex…
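The Kronecker matrix-vector products in this access model can be computed without ever materializing the length-$\prod_i n_i$ vector $\mathrm{x}_1 \otimes \cdots \otimes \mathrm{x}_q$, by contracting one factor at a time. A small sketch (the helper name is illustrative):

```python
import numpy as np

def kron_matvec(A, xs):
    """Compute A @ (x_1 kron x_2 kron ... kron x_q) by reshaping A's
    column index as a multi-index and contracting the last factor
    repeatedly, so the full Kronecker vector is never formed."""
    m = A.shape[0]
    M = A
    for x in reversed(xs):
        # group the trailing index of the multi-index and contract with x
        M = (M.reshape(-1, len(x)) @ x).reshape(m, -1)
    return M.reshape(m)
```

For dense $A$ this still touches every entry once, but the query vector itself only needs $\sum_i n_i$ numbers instead of $\prod_i n_i$, which is the point of the access model.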
Algorithm-agnostic low-rank approximation of operator monotone matrix functions
Low-rank approximation of a matrix function, $f(A)$, is an important task in computational mathematics. Most methods require direct access to $f(A)$, which is often considerably more expensive than accessing $A$. Persson and Kressner (SIMA…
Hutchinson's Estimator is Bad at Kronecker-Trace-Estimation
We study the problem of estimating the trace of a matrix $\mathbf{A}$ that can only be accessed through Kronecker-matrix-vector products. That is, for any Kronecker-structured vector $\mathrm{x} = \otimes_{i=1}^k \mathrm{x}_i$, we can comp…
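The estimator under study is Hutchinson's, restricted to Kronecker-structured query vectors: each query is $\mathrm{x} = \otimes_{i=1}^k \mathrm{x}_i$ with i.i.d. sign entries, and the estimate averages $\mathrm{x}^\top (\mathbf{A}\mathrm{x})$. A minimal sketch (the function name and the `matvec` callback convention are illustrative):

```python
import numpy as np
from functools import reduce

def kron_hutchinson(matvec, dims, num_samples, rng=None):
    """Hutchinson's trace estimator using only Kronecker-structured
    queries: x = x_1 kron ... kron x_k with +/-1 entries. Unbiased,
    since E[x x^T] = I even under the Kronecker restriction."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        xs = [rng.choice([-1.0, 1.0], size=d) for d in dims]
        x = reduce(np.kron, xs)      # a sign vector of length prod(dims)
        total += x @ matvec(x)
    return total / num_samples
```

The estimator stays unbiased, and the abstract's question is how its variance degrades under the Kronecker restriction compared with unstructured sign queries.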
On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation
Krylov subspace methods are a ubiquitous tool for computing near-optimal rank $k$ approximations of large matrices. While "large block" Krylov methods with block size at least $k$ give the best known theoretical guarantees, block size one …
Near-Linear Sample Complexity for $L_p$ Polynomial Regression
We study $L_p$ polynomial regression. Given query access to a function $f:[-1,1] \rightarrow \mathbb{R}$, the goal is to find a degree $d$ polynomial $\hat{q}$ such that, for a given parameter $\varepsilon > 0$, $$ \|\hat{q}-f\|_p\le (1+\v…
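For intuition, the classical $L_2$ baseline draws samples from the Chebyshev (arcsine) density on $[-1,1]$ and solves a least-squares fit in the Chebyshev basis; this is a standard leverage-score-style sampling sketch, not the paper's $L_p$ algorithm, and the names below are illustrative.

```python
import numpy as np

def chebyshev_lstsq_polyfit(f, d, num_samples, rng=None):
    """Fit a degree-d polynomial to f on [-1, 1] by sampling points
    from the arcsine density (cos of uniform angles) and solving an
    ordinary least-squares problem in the Chebyshev basis."""
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, np.pi, num_samples)
    x = np.cos(theta)                                  # arcsine-distributed samples
    V = np.polynomial.chebyshev.chebvander(x, d)       # Chebyshev Vandermonde matrix
    c, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    return np.polynomial.chebyshev.Chebyshev(c)
```

Chebyshev sampling keeps the number of evaluations near-linear in $d$ for $p = 2$; the paper's contribution is achieving near-linear sample complexity for general $L_p$ norms.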
Fast Regression for Structured Inputs
We study the $\ell_p$ regression problem, which requires finding $\mathbf{x}\in\mathbb R^{d}$ that minimizes $\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_p$ for a matrix $\mathbf{A}\in\mathbb R^{n \times d}$ and response vector $\mathbf{b}\in\math…
Hutch++: Optimal Stochastic Trace Estimation
We study the problem of estimating the trace of a matrix $A$ that can only be accessed through matrix-vector multiplication. We introduce a new randomized algorithm, Hutch++, which computes a $(1 \pm \varepsilon)$ approximation to $\operatorname{tr}(A)$ for any po…
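The Hutch++ template splits the matrix-vector budget three ways: sketch a low-rank subspace $Q$, compute $\operatorname{tr}(Q^\top A Q)$ exactly, and run plain Hutchinson on the deflated remainder. A compact numpy sketch of that template (the even three-way split and the `matvec` callback are illustrative choices):

```python
import numpy as np

def hutch_pp(matvec, n, num_queries, rng=None):
    """Hutch++: deflate a sketched low-rank subspace, trace it exactly,
    and estimate the trace of the remainder with Hutchinson's method.
    Uses num_queries matrix-vector products in total."""
    rng = np.random.default_rng(rng)
    m = num_queries // 3
    S = rng.choice([-1.0, 1.0], size=(n, m))
    AS = np.column_stack([matvec(S[:, i]) for i in range(m)])
    Q, _ = np.linalg.qr(AS)                      # basis for the sketched range of A
    AQ = np.column_stack([matvec(Q[:, i]) for i in range(m)])
    t_low = np.trace(Q.T @ AQ)                   # exact trace on span(Q)
    G = rng.choice([-1.0, 1.0], size=(n, m))
    Gp = G - Q @ (Q.T @ G)                       # project queries off span(Q)
    AG = np.column_stack([matvec(Gp[:, i]) for i in range(m)])
    resid = AG - Q @ (Q.T @ AG)
    t_hutch = np.trace(Gp.T @ resid) / m         # Hutchinson on the deflated part
    return t_low + t_hutch
```

Because the Hutchinson step only sees the deflated matrix, its variance depends on the tail of the spectrum rather than all of $\|A\|_F^2$, which is the source of the improved query complexity.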
The Statistical Cost of Robust Kernel Hyperparameter Tuning
This paper studies the statistical complexity of kernel hyperparameter tuning in the setting of active regression under adversarial noise. We consider the problem of finding the best interpolant from a class of kernels with unknown hyperpa…
Optimality Implies Kernel Sum Classifiers are Statistically Efficient
We propose a novel combination of optimization tools with learning theory bounds in order to analyze the sample complexity of optimal kernel sum classifiers. This contrasts the typical learning theoretic results which hold for all (potenti…