Alex Townsend
Convergence of Pivoted Cholesky Algorithm for Lipschitz Kernels
We investigate the continuous analogue of the Cholesky factorization, namely the pivoted Cholesky algorithm. Our analysis establishes quantitative convergence guarantees for kernels of minimal smoothness. We prove that for a symmetric posi…
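For a finite symmetric positive semidefinite matrix, the classical pivoted Cholesky algorithm the paper analyzes can be sketched as follows (a minimal sketch; the Gaussian kernel, grid, rank, and tolerance below are illustrative choices, not the paper's continuous setting):

```python
import numpy as np

def pivoted_cholesky(A, rank, tol=1e-12):
    """Greedy pivoted Cholesky: low-rank factor L with A ~ L @ L.T.

    A must be symmetric positive semidefinite. At each step the pivot
    is the largest remaining diagonal entry of the Schur complement.
    """
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # running diagonal of the Schur complement
    L = np.zeros((n, rank))
    for k in range(rank):
        p = int(np.argmax(d))             # pivot: largest residual diagonal
        if d[p] <= tol:
            return L[:, :k]               # numerically converged early
        L[:, k] = (A[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        d -= L[:, k] ** 2                 # update Schur-complement diagonal
    return L

# Example: Gaussian kernel matrix on a 1D grid (rapidly decaying spectrum)
x = np.linspace(0, 1, 50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.5)
L = pivoted_cholesky(K, rank=20)
err = np.linalg.norm(K - L @ L.T)
print(err)  # small: K is well approximated at this rank
```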
A Hidden Variable Resultant Method for the Polynomial Multiparameter Eigenvalue Problem
We present a novel, global algorithm for solving polynomial multiparameter eigenvalue problems (PMEPs) by leveraging a hidden variable tensor Dixon resultant framework. Our method transforms a PMEP into one or more univariate polynomial ei…
Extending Mercer's expansion to indefinite and asymmetric kernels
Mercer's expansion and Mercer's theorem are cornerstone results in kernel theory. While the classical Mercer's theorem only considers continuous symmetric positive definite kernels, analogous expansions are effective in practice for indefi…
Numerical Instability of Algebraic Rootfinding Methods
We demonstrate that the most popular variants of all common algebraic multidimensional rootfinding algorithms are unstable by analyzing the conditioning of subproblems that are constructed at intermediate steps. In particular, we give mult…
A generalization of the randomized singular value decomposition
The randomized singular value decomposition (SVD) is a popular and effective algorithm for computing a near-best rank k approximation of a matrix A using matrix-vector products with standard Gaussian vectors. Here, we generalize the random…
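The standard randomized SVD that this paper generalizes can be sketched in a few lines (a minimal Halko–Martinsson–Tropp-style sketch; the oversampling parameter and seed are illustrative):

```python
import numpy as np

def randomized_svd(A, k, oversample=5):
    """Basic randomized SVD: near-best rank-k approximation via
    matrix-vector products with standard Gaussian vectors."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((n, k + oversample))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # range finder
    B = Q.T @ A                                       # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Example: an exactly rank-5 matrix is recovered essentially exactly
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ (s[:, None] * Vt))
print(err)  # near machine precision: A has exact rank 5
```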
How to reveal the rank of a matrix?
We study algorithms called rank-revealers that reveal a matrix's rank structure. Such algorithms form a fundamental component in matrix compression, singular value estimation, and column subset selection problems. While column-pivoted QR h…
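Column-pivoted QR, the baseline rank-revealer discussed here, can be illustrated as follows (a minimal modified Gram–Schmidt sketch; the rank threshold is an illustrative choice, not the paper's criterion):

```python
import numpy as np

def cpqr_rdiag(A):
    """Return |diag(R)| from column-pivoted QR (modified Gram-Schmidt).

    The pivoted diagonal of R decays with the singular values, so the
    numerical rank can be read off where it drops."""
    A = A.astype(float).copy()
    diag = []
    for k in range(min(A.shape)):
        p = k + int(np.argmax(np.linalg.norm(A[:, k:], axis=0)))
        A[:, [k, p]] = A[:, [p, k]]       # pivot: largest residual column first
        rkk = np.linalg.norm(A[:, k])
        diag.append(rkk)
        if rkk > 0:
            q = A[:, k] / rkk             # new orthonormal direction
            A[:, k + 1:] -= np.outer(q, q @ A[:, k + 1:])
    return np.array(diag)

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 60))  # rank 8
d = cpqr_rdiag(A)
rank = int(np.sum(d > 1e-8 * d[0]))      # count significant diagonal entries
print(rank)  # 8
```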
Leveraging the Hankel norm approximation and data-driven algorithms in reduced order modeling
Large-scale linear time-invariant (LTI) dynamical systems are widely used to characterize complicated physical phenomena. Glover developed the Hankel norm approximation (HNA) algorithm for optimally reducing the system in the Hanke…
Operator learning without the adjoint
There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing the adjoint? Current practical approaches suggest that one can accurately recover an operator while only using …
Operator learning for hyperbolic partial differential equations
We construct the first rigorously justified probabilistic algorithm for recovering the solution operator of a hyperbolic partial differential equation (PDE) in two variables from input-output training pairs. The primary challenge of recove…
Beyond expectations: residual dynamic mode decomposition and variance for stochastic dynamical systems
Koopman operators linearize nonlinear dynamical systems, making their spectral information of crucial interest. Numerous algorithms have been developed to approximate these spectral properties, and dynamic mode decomposition (DMD) stands o…
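Plain DMD, the starting point for the residual variant studied here, can be sketched on a known linear system (a minimal sketch; it does not compute the paper's residuals or variances):

```python
import numpy as np

# Exact DMD sketch: estimate Koopman/transition eigenvalues from
# snapshot pairs (x_k, x_{k+1}). Here the dynamics are linear and known,
# so the recovered eigenvalues can be checked against the truth.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.5]])    # known dynamics x' = A x
X = rng.standard_normal((2, 200))               # snapshots
Y = A_true @ X                                  # time-shifted snapshots
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Atilde = U.T @ Y @ Vt.T @ np.diag(1.0 / s)      # operator projected onto POD modes
eigs = np.linalg.eigvals(Atilde)
print(np.sort(eigs.real))  # ~ [0.5, 0.9], the true eigenvalues
```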
A Mathematical Guide to Operator Learning
Operator learning aims to discover properties of an underlying dynamical system or partial differential equation (PDE) from data. Here, we present a step-by-step guide to operator learning. We explain the types of problems and PDEs amenabl…
ContHutch++: Stochastic trace estimation for implicit integral operators
Hutchinson's estimator is a randomized algorithm that computes an $ε$-approximation to the trace of any positive semidefinite matrix using $\mathcal{O}(1/ε^2)$ matrix-vector products. An improvement of Hutchinson's estimator, known as Hutc…
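Hutchinson's estimator itself, in its matrix form (not the paper's continuous ContHutch++), is nearly a one-liner (a minimal sketch; the test matrix and probe count are illustrative):

```python
import numpy as np

# Hutchinson's stochastic trace estimator: E[z^T A z] = tr(A) for any
# random z with E[z z^T] = I. Rademacher probes are a standard choice.
rng = np.random.default_rng(0)
n, num_probes = 200, 2000
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1, 2, n)) @ Q.T         # SPD test matrix, tr(A) = 300
Z = rng.choice([-1.0, 1.0], size=(n, num_probes))   # Rademacher probes
est = np.mean(np.sum(Z * (A @ Z), axis=0))          # average of z^T A z
print(est, np.trace(A))  # estimate is close to the exact trace
```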
Elliptic PDE learning is provably data-efficient
Partial differential equation (PDE) learning is an emerging field that combines physics and machine learning to recover unknown physical systems from experimental data. While deep learning models traditionally require copious amounts of t…
Rigorous data-driven computation of spectral properties of Koopman operators for dynamical systems
Koopman operators are infinite‐dimensional operators that globally linearize nonlinear dynamical systems, making their spectral information valuable for understanding dynamics. However, Koopman operators can have continuous spectra and inf…
Parallel Algorithms for Computing the Tensor-Train Decomposition
The tensor-train (TT) decomposition expresses a tensor in a data-sparse format used in molecular simulations, high-order correlation functions, and optimization. In this paper, we propose four parallelizable algorithms that compute the TT …
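The sequential TT-SVD baseline that parallel algorithms are measured against can be sketched as follows (a minimal Oseledets-style sketch; the truncation tolerance and the separable test tensor are illustrative):

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Sequential TT-SVD: decompose T into a list of 3D TT cores by
    repeated reshapes and truncated SVDs."""
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))   # truncation rank
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

# Example: a separable tensor T[i,j,k] = a[i]*b[j]*c[k] has all TT ranks 1
a, b, c = np.arange(1, 5.0), np.arange(1, 4.0), np.arange(1, 6.0)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_svd(T)
print([G.shape for G in cores])  # [(1, 4, 1), (1, 3, 1), (1, 5, 1)]
```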
Avoiding discretization issues for nonlinear eigenvalue problems
The first step when solving an infinite-dimensional eigenvalue problem is often to discretize it. We show that one must be extremely careful when discretizing nonlinear eigenvalue problems. Using examples, we show that discretization can: …
Leveraging the Hankel norm approximation and block-AAA algorithms in reduced order modeling
Large-scale linear, time-invariant (LTI) dynamical systems are widely used to characterize complicated physical phenomena. We propose a two-stage algorithm to reduce the order of a large-scale LTI system given samples of its transfer funct…
Are sketch-and-precondition least squares solvers numerically stable?
Sketch-and-precondition techniques are efficient and popular for solving large least squares (LS) problems of the form $Ax=b$ with $A\in\mathbb{R}^{m\times n}$ and $m\gg n$. This is where $A$ is "sketched" to a smaller matrix $SA$ with $S…
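The basic sketch-and-precondition idea can be illustrated by checking the conditioning it delivers (a minimal sketch; the sketch size $4n$ and dense Gaussian $S$ are illustrative choices, and nothing here bears on the stability question the paper studies):

```python
import numpy as np

# Sketch A with a Gaussian S and use the R factor of S @ A as a right
# preconditioner: A @ inv(R) is then nearly orthonormal, so iterative
# LS solvers applied to it converge in O(1) iterations.
rng = np.random.default_rng(0)
m, n = 2000, 50
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, 8, n)) @ V.T           # cond(A) = 1e8
S = rng.standard_normal((4 * n, m)) / np.sqrt(4 * n)  # Gaussian sketch
_, R = np.linalg.qr(S @ A)                            # preconditioner from the sketch
cond_before = np.linalg.cond(A)
cond_after = np.linalg.cond(A @ np.linalg.inv(R))
print(cond_before, cond_after)  # ~1e8 vs O(1)
```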
Structured matrix recovery from matrix-vector products
Can one recover a matrix efficiently from only matrix-vector products? If so, how many are needed? This paper describes algorithms to recover matrices with known structures, such as tridiagonal, Toeplitz, Toeplitz-like, and hierarchical lo…
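For the tridiagonal case the idea can be made concrete: three probe vectors suffice, because columns whose indices agree modulo 3 have disjoint supports (an illustrative sketch, not the paper's general algorithm):

```python
import numpy as np

# Recover a tridiagonal matrix T from exactly 3 matrix-vector products.
# Probe r hits every third column at once; since those columns' supports
# (rows j-1..j+1) never overlap, each product reveals them all exactly.
rng = np.random.default_rng(0)
n = 12
T = np.diag(rng.standard_normal(n)) \
    + np.diag(rng.standard_normal(n - 1), 1) \
    + np.diag(rng.standard_normal(n - 1), -1)
R = np.zeros((n, n))
for r_cls in range(3):
    probe = np.zeros(n)
    probe[r_cls::3] = 1.0                  # indicator of columns = r_cls (mod 3)
    y = T @ probe                          # one matrix-vector product
    for j in range(r_cls, n, 3):           # unpack the non-overlapping columns
        lo, hi = max(0, j - 1), min(n, j + 2)
        R[lo:hi, j] = y[lo:hi]
R_err = np.linalg.norm(R - T)
print(R_err)  # 0.0: exact recovery from 3 matvecs
```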
Expander graphs are globally synchronizing
The Kuramoto model is fundamental to the study of synchronization. It consists of a collection of oscillators with interactions given by a network, which we identify respectively with vertices and edges of a graph. In this paper, we show t…
Probabilistic Missing Value Imputation for Mixed Categorical and Ordered Data
Many real-world datasets contain missing entries and mixed data types including categorical and ordered (e.g. continuous and ordinal) variables. Imputing the missing entries is necessary, since many data analysis pipelines require complete…
Exploring the electric field around a loop of static charge: Rectangles, stadiums, ellipses, and knots
We study the electric field around a continuous one-dimensional loop of static charge, under the assumption that the charge is distributed uniformly along the loop. For rectangular or stadium-shaped loops in the plane, we find that the ele…
Tuning Frequency Bias in Neural Network Training with Nonuniform Data
Small generalization errors of over-parameterized neural networks (NNs) can be partially explained by the frequency biasing phenomenon, where gradient-based algorithms minimize the low-frequency misfit before reducing the high-frequency re…
Learning Green's functions associated with time-dependent partial differential equations
Neural operators are a popular technique in scientific machine learning to learn a mathematical model of the behavior of unknown physical systems from data. Neural operators are especially useful to learn solution operators associated with…
A global synchronization theorem for oscillators on a random graph
Consider $n$ identical Kuramoto oscillators on a random graph. Specifically, consider Erdős–Rényi random graphs in which any two oscillators are bidirectionally coupled with unit strength, independently and at random, with probability $0\leq p\leq…