Jared Tanner
Mind the Gap: a Spectral Analysis of Rank Collapse and Signal Propagation in Attention Layers
Attention layers are the core component of transformers, the current state-of-the-art neural network architecture. Alternatives to softmax-based attention are being explored due to its tendency to hinder effective information flow. Even at…
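As context for this entry, a minimal sketch of the rank-collapse phenomenon the title refers to: stacking random softmax self-attention layers and tracking the stable rank of the token representations. This is a toy illustration, not the paper's spectral analysis; the widths and the single-head, no-residual setup are assumptions.

```python
# Toy illustration: stable rank of token representations decays as
# random softmax self-attention layers are stacked (no residuals, one head).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def stable_rank(X):
    s = np.linalg.svd(X, compute_uv=False)
    return (s**2).sum() / s[0]**2          # ||X||_F^2 / ||X||_2^2

n_tokens, d = 64, 32                        # hypothetical sizes
X = rng.standard_normal((n_tokens, d))
for layer in range(20):
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))   # attention matrix
    X = A @ (X @ Wv)                                   # value aggregation
    print(layer, stable_rank(X))
```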
Deep Neural Network Initialization with Sparsity Inducing Activations
Inducing and leveraging sparse activations during training and inference is a promising avenue for improving the computational efficiency of deep networks, which is increasingly important as network sizes continue to grow and their applica…
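A quick sanity check of the basic mechanism: a shifted ReLU, phi(x) = max(x - tau, 0), zeroes out the fraction Phi(tau) of Gaussian pre-activations at initialisation. The threshold tau below is hypothetical; this is generic background, not the paper's proposed activation.

```python
# Activation sparsity at init: shifted ReLU kills a Phi(tau) fraction of
# standard Gaussian pre-activations (tau here is a hypothetical choice).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
tau = 1.0
z = rng.standard_normal(1_000_000)                 # Gaussian pre-activations
empirical = np.mean(np.maximum(z - tau, 0.0) == 0.0)
print(empirical, norm.cdf(tau))                    # both ~0.841 for tau = 1
```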
Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes
The infinitely wide neural network has been proven a useful and manageable mathematical model that enables the understanding of many phenomena appearing in deep learning. One example is the convergence of random deep networks to Gaussian p…
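For context, the i.i.d.-weight result that the title generalises beyond is usually stated via a kernel recursion. The standard form, for weight variance sigma_w^2/n, bias variance sigma_b^2, and activation phi (the sparse and low-rank extensions are the paper's contribution and are not reproduced here):

```latex
K^{(\ell+1)}(x, x') = \sigma_b^2
  + \sigma_w^2 \,\mathbb{E}_{(u,v)\sim\mathcal{N}\!\left(0,\Lambda^{(\ell)}\right)}
      \big[\phi(u)\,\phi(v)\big],
\qquad
\Lambda^{(\ell)} =
\begin{pmatrix}
K^{(\ell)}(x, x)  & K^{(\ell)}(x, x') \\
K^{(\ell)}(x, x') & K^{(\ell)}(x', x')
\end{pmatrix}.
```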
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
The ever-increasing scale of large language models (LLMs), though opening a potential path toward artificial general intelligence, places a daunting obstacle in the way of their on-device deployment. As one of the most well-esta…
Vulnerable Dispositional Traits And Poverty Are Associated With Older Brain Age In Adults With Knee Pain
On the Initialisation of Wide Low-Rank Feedforward Neural Networks
The edge-of-chaos dynamics of wide randomly initialized low-rank feedforward networks are analyzed. Formulae for the optimal weight and bias variances are extended from the full-rank to low-rank setting and are shown to follow from multipl…
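For reference, the full-rank mean-field conditions that the abstract says are extended to the low-rank setting take the following standard form (stated for i.i.d. Gaussian weights with variance sigma_w^2/N and biases with variance sigma_b^2; this is textbook background, not the paper's low-rank formulae):

```latex
% Length-map fixed point for the pre-activation variance q^*:
q^* = \sigma_w^2 \,\mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\left[\phi\!\left(\sqrt{q^*}\,z\right)^{2}\right] + \sigma_b^2,
% Edge-of-chaos criticality: unit slope of the correlation map,
\chi = \sigma_w^2 \,\mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\left[\phi'\!\left(\sqrt{q^*}\,z\right)^{2}\right] = 1.
```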
Optimal Approximation Complexity of High-Dimensional Functions with Neural Networks
We investigate properties of neural networks that use both ReLU and $x^2$ as activation functions and build upon previous results to show that both analytic functions and functions in Sobolev spaces can be approximated by such networks of …
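One reason $x^2$ is a powerful companion to ReLU is that squaring makes exact multiplication, and hence exact polynomials, available in shallow networks. A worked identity (standard background, not a result specific to this paper):

```latex
xy \;=\; \tfrac{1}{2}\left((x+y)^2 - x^2 - y^2\right),
```

so a layer of three $x^2$ units followed by a fixed linear combination computes the product $xy$ exactly, and iterating the construction yields monomials of any degree without approximation error.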
Improved Projection Learning for Lower Dimensional Feature Maps
The requirement to repeatedly move large feature maps off- and on-chip during inference with convolutional neural networks (CNNs) imposes high costs in terms of both energy and time. In this work we explore an improved method for compressi…
Tuning-free multi-coil compressed sensing MRI with Parallel Variable Density Approximate Message Passing (P-VDAMP)
Magnetic Resonance Imaging (MRI) has excellent soft tissue contrast but is hindered by an inherently slow data acquisition process. Compressed sensing, which reconstructs sparse signals from incoherently sampled data, has been widely appli…
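To make the problem setup concrete, a minimal compressed-sensing MRI sketch: variable-density Fourier undersampling solved with plain ISTA (soft-thresholded gradient steps). This is generic background, not the paper's P-VDAMP algorithm; the sparsifying basis here is the identity and the mask density and threshold lam are hypothetical.

```python
# Variable-density Fourier undersampling + ISTA reconstruction (sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros((n, n)); x_true[24:40, 24:40] = 1.0     # toy sparse image

# Variable-density mask: sample low spatial frequencies more densely.
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
prob = np.minimum(1.0, 0.03 / (fx**2 + fy**2 + 1e-3))
mask = rng.random((n, n)) < prob

F  = lambda u: np.fft.fft2(u, norm="ortho")               # unitary FFT
Fh = lambda k: np.fft.ifft2(k, norm="ortho")
y = mask * F(x_true)                                      # undersampled k-space

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0)
x, lam = np.zeros((n, n)), 0.005
for _ in range(300):
    x = soft(x - np.real(Fh(mask * F(x) - y)), lam)       # ISTA step
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```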
Activation function design for deep networks: linearity and effective initialisation
The activation function deployed in a deep neural network has great influence on the performance of the network at initialisation, which in turn has implications for training. In this paper we study how to avoid two problems at initiali…
Trajectory growth lower bounds for random sparse deep ReLU networks
This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks. We generalise existing resu…
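A toy measurement of the quantity being bounded: pass a one-dimensional circle of inputs through a random dense ReLU network and report the trajectory length after each layer. The width, depth, and weight scale below are hypothetical, and this uses dense rather than sparse weights; the paper's contribution is the sparse generalisation.

```python
# Trajectory length growth through a random dense ReLU net (sketch).
import numpy as np

rng = np.random.default_rng(0)
width, depth, sigma_w = 200, 10, 2.0     # sigma_w^2 = 4 > 2: expansion regime

t = np.linspace(0, 2 * np.pi, 500)
X = np.stack([np.cos(t), np.sin(t)], axis=1)       # 1-D trajectory in R^2
X = X @ rng.standard_normal((2, width))            # embed into the network

length = lambda P: np.linalg.norm(np.diff(P, axis=0), axis=1).sum()
for layer in range(depth):
    W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
    X = np.maximum(X @ W, 0.0)                     # ReLU layer
    print(layer, length(X))
```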
Mutual Information of Neural Network Initialisations: Mean Field Approximations
The ability to train randomly initialised deep neural networks is known to depend strongly on the variance of the weight matrices and biases as well as the choice of nonlinear activation. Here we complement the existing geometric analysis …
An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks
We perform a comprehensive study on the performance of derivative free optimization (DFO) algorithms for the generation of targeted black-box adversarial attacks on Deep Neural Network (DNN) classifiers assuming the perturbation energy …
Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset
That neural networks may be pruned to high sparsities and retain high accuracy is well established. Recent research efforts focus on pruning immediately after initialization so as to allow the computational savings afforded by sparsity to …
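An illustrative sketch, with assumptions flagged: one reading of a "subspace offset" is an effective weight matrix W = D + S, where D is a fixed, cheap-to-apply dense transform and only the sparse part S is trained. The choice of D as a DCT, the sparsity level, and the layer shape below are all hypothetical; this is not a reproduction of the paper's method.

```python
# Sparse trainable weights plus a fixed dense offset (hypothetical setup).
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n, density = 128, 0.05

D = dct(np.eye(n), norm="ortho")                  # fixed dense offset (DCT)
mask = rng.random((n, n)) < density               # static sparsity pattern
S = np.where(mask, rng.standard_normal((n, n)) / np.sqrt(density * n), 0.0)

x = rng.standard_normal(n)
y = D @ x + S @ x   # dense-like behaviour; only S holds trainable parameters,
                    # while D can be applied as a fast transform
print(np.count_nonzero(S), "trainable weights of", n * n)
```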
Pain Severity and Interference in Different Parkinson’s Disease Cognitive Phenotypes
Authors: Yenisel Cruz-Almeida (1), Samuel J Crowley (2), Jared Tanner (2), Catherine C Price (2,3). Affiliations: (1) Pain Research & Intervention Center of Excellence, University of Florida, Gainesville, FL, USA; (2) Department of Clinical and Health Psychology, University of Flo…
An Approximate Message Passing Algorithm For Rapid Parameter-Free Compressed Sensing MRI
For certain sensing matrices, the Approximate Message Passing (AMP) algorithm efficiently reconstructs undersampled signals. However, in Magnetic Resonance Imaging (MRI), where Fourier coefficients of a natural image are sampled with varia…
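For reference, a minimal AMP sketch in the setting where it is known to work, i.i.d. Gaussian sensing, with soft thresholding as the denoiser and the Onsager correction term that distinguishes AMP from plain iterative thresholding. The threshold schedule and problem sizes are simple heuristics, not the paper's choices.

```python
# AMP with soft thresholding for i.i.d. Gaussian sensing (sketch).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 500, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)       # i.i.d. Gaussian matrix
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0)
x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)           # effective noise estimate
    x_new = soft(x + A.T @ z, tau)                 # denoise pseudo-data
    onsager = z * np.count_nonzero(x_new) / m      # (n/m) * mean eta' term
    z = y - A @ x_new + onsager                    # corrected residual
    x = x_new
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```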
Geometric anomaly detection in data
Significance: The problem of fitting low-dimensional manifolds to high-dimensional data has been extensively studied from both theoretical and computational perspectives. As datasets get more heterogeneous and complicated, so must the space…
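A background sketch of local PCA, the basic primitive behind detecting points where the geometry of the data changes: estimate an intrinsic dimension at each point from the spectrum of its neighbourhood, and flag points whose local dimension disagrees with the bulk. The dataset, neighbourhood size, and variance threshold are hypothetical.

```python
# Local-PCA intrinsic dimension estimates on mixed-dimension data (sketch).
import numpy as np

rng = np.random.default_rng(0)
# A 1-D circle and a 2-D disc placed together in R^3.
t = rng.uniform(0, 2 * np.pi, 300)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
disc = np.concatenate([rng.uniform(-0.5, 0.5, (300, 2)), np.ones((300, 1))], 1)
X = np.concatenate([circle, disc])

def local_dim(X, i, k=20, var_frac=0.95):
    d2 = np.sum((X - X[i])**2, axis=1)
    nbrs = X[np.argsort(d2)[1:k + 1]]                     # k nearest neighbours
    s = np.linalg.svd(nbrs - nbrs.mean(0), compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(frac, var_frac) + 1)       # dims for 95% variance

dims = np.array([local_dim(X, i) for i in range(len(X))])
print(np.bincount(dims))    # mostly 1s (circle points) and 2s (disc points)
```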
Compressed sensing of low-rank plus sparse matrices
Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data popularized as Robust PCA (Candes et al., 2011; Chandrasekaran et al., 2009). Compressed sensing, matr…
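As background on the model itself, a sketch of separating M = L + S by alternating singular-value thresholding (for the low-rank part) and entrywise soft thresholding (for the sparse part). This is a generic Robust-PCA-style heuristic, not the paper's compressed-sensing algorithm, and the thresholds are hypothetical.

```python
# Low-rank + sparse separation by alternating thresholding (sketch).
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 100, 5, 0.05
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.where(rng.random((n, n)) < p, 10 * rng.standard_normal((n, n)), 0)
M = L_true + S_true

def svt(X, t):                         # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0)) @ Vt

soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0)
L = np.zeros((n, n)); S = np.zeros((n, n))
for _ in range(50):
    L = svt(M - S, 5.0)                # shrink singular values of M - S
    S = soft(M - L, 1.0)               # shrink entries of M - L
print("relative error in L:",
      np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```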
Simulating the outer layers of rapidly rotating stars
This paper presents the results of a set of radiative hydrodynamic simulations of convection in the near-surface regions of a rapidly rotating star. The simulations use microphysics consistent with stellar models, and include the effects o…
The Permuted Striped Block Model and its Factorization - Algorithms with Recovery Guarantees
We introduce a novel class of matrices which are defined by the factorization $\textbf{Y} :=\textbf{A}\textbf{X}$, where $\textbf{A}$ is an $m \times n$ wide sparse binary matrix with a fixed number $d$ nonzeros per column and $\textbf{X}$…
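To make the model concrete, a small generator for the factorization described above: Y = AX with A an m x n sparse binary matrix having exactly d nonzeros per column. The dimensions and the Gaussian distribution of X are hypothetical choices for illustration.

```python
# Generate an instance of the Y = A X model with d ones per column of A.
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 40, 100, 3

A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0   # exactly d ones

X = rng.standard_normal((n, 60))                       # latent factor
Y = A @ X                                              # observed matrix
print(A.sum(axis=0))                                   # every column sums to d
```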
Encoder blind combinatorial compressed sensing
In its most elementary form, compressed sensing studies the design of decoding algorithms to recover a sufficiently sparse vector or code from a lower dimensional linear measurement vector. Typically it is assumed that the decoder has acce…
A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA
We demonstrate that model-based derivative free optimisation algorithms can generate adversarial targeted misclassification of deep networks using fewer network queries than non-model-based methods. Specifically, we consider the black-box …
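To illustrate the query-only access model, a generic derivative-free attack loop: coordinate-wise random search for a targeted perturbation under an L-infinity budget, counting one network query per step. This is plain random search, not the model-based BOBYQA solver the paper studies, and the toy linear "network" is hypothetical.

```python
# Query-based targeted attack via derivative-free coordinate search (sketch).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 50))                  # stand-in linear classifier
logits = lambda x: W @ x                           # black box: queries only

x0 = rng.standard_normal(50)
target, eps = 3, 0.3                               # target class, L-inf budget
loss = lambda d: -float(logits(x0 + d)[target])    # maximise target logit

delta, best = np.zeros(50), loss(np.zeros(50))
for q in range(2000):                              # each step = one query
    i = rng.integers(50)
    cand = delta.copy()
    cand[i] = np.clip(cand[i] + rng.choice([-0.1, 0.1]), -eps, eps)
    if (l := loss(cand)) < best:
        delta, best = cand, l
print("target logit after attack:", logits(x0 + delta)[target])
```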
Approximate message passing with a colored aliasing model for variable density Fourier sampled images
The Approximate Message Passing (AMP) algorithm efficiently reconstructs signals which have been sampled with large i.i.d. sub-Gaussian sensing matrices. However, when Fourier coefficients of a signal with non-uniform spectral density are sample…
Sparse non-negative super-resolution — simplified and stabilised