Patrick Kidger
AlgoTune: Can Language Models Speed Up General-Purpose Numerical Programs?
Despite progress in language model (LM) capabilities, evaluations have thus far focused on models' performance on tasks that humans have previously solved, including in programming (Jimenez et al., 2024) and mathematics (Glazer et al., 202…
JAX‐CanVeg: A Differentiable Land Surface Model
Land surface models consider the exchange of water, energy, and carbon along the soil‐canopy‐atmosphere continuum, exchanges that are challenging to model due to their complex interdependency and associated challenges in representing and parameteriz…
Learning Constitutive Relations From Soil Moisture Data via Physically Constrained Neural Networks
The constitutive relations of the Richardson‐Richards equation encode the macroscopic properties of soil water retention and conductivity. These soil hydraulic functions are commonly represented by models with a handful of parameters. The …
Single-seed generation of Brownian paths and integrals for adaptive and high order SDE solvers
Despite the success of adaptive time-stepping in ODE simulation, it has so far seen few applications for Stochastic Differential Equations (SDEs). To simulate SDEs adaptively, methods such as the Virtual Brownian Tree (VBT) have been devel…
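This line of work underlies the VirtualBrownianTree in the Diffrax library. As a minimal sketch (assuming Diffrax's documented API; the tolerance and times here are arbitrary), a Brownian path can be reproducibly queried at arbitrary times from a single PRNG seed:

```python
import jax.random as jr
import diffrax

# A Brownian path reproducible from one key; queries at arbitrary times are
# resolved to tolerance `tol` by splitting the key down a binary tree.
bm = diffrax.VirtualBrownianTree(t0=0.0, t1=1.0, tol=1e-3, shape=(), key=jr.PRNGKey(0))
print(bm.evaluate(0.25))        # W(0.25)
print(bm.evaluate(0.25, 0.75))  # increment W(0.75) - W(0.25)
```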
Optimistix: modular optimisation in JAX and Equinox
We introduce Optimistix: a nonlinear optimisation library built in JAX and Equinox. Optimistix introduces a novel, modular approach for its minimisers and least-squares solvers. This modularity relies on new practical abstractions for opti…
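For illustration, a minimal sketch of the library's entry point, minimising the Rosenbrock function with BFGS (following the Optimistix documentation; tolerances chosen arbitrarily). The modularity means the solver can be swapped without touching the surrounding code:

```python
import jax.numpy as jnp
import optimistix as optx

def rosenbrock(y, args):
    x1, x2 = y
    return (1.0 - x1) ** 2 + 100.0 * (x2 - x1**2) ** 2

solver = optx.BFGS(rtol=1e-6, atol=1e-6)
sol = optx.minimise(rosenbrock, solver, jnp.array([2.0, 2.0]))
print(sol.value)  # approximately [1., 1.]
```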
Lineax: unified linear solves and linear least-squares in JAX and Equinox
We introduce Lineax, a library bringing linear solves and linear least-squares to the JAX+Equinox scientific computing ecosystem. Lineax uses general linear operators, and unifies linear solves and least-squares into a single, autodifferen…
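A minimal sketch of the pattern described, per the Lineax documentation: wrap a matrix as a general linear operator and solve Ax = b through one autodifferentiable interface.

```python
import jax.random as jr
import lineax as lx

matrix = jr.normal(jr.PRNGKey(0), (4, 4))
vector = jr.normal(jr.PRNGKey(1), (4,))
operator = lx.MatrixLinearOperator(matrix)
# The solver is auto-selected by default; least-squares problems go through
# the same interface, given a suitable solver choice.
solution = lx.linear_solve(operator, vector)
print(solution.value)
```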
On Neural Differential Equations
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Trad…
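The "two sides of the same coin" observation can be stated in one line: a residual block is an explicit Euler step of an ODE whose vector field is a neural network. Schematically:

```latex
\frac{\mathrm{d}z}{\mathrm{d}t}(t) = f_\theta\bigl(t, z(t)\bigr)
\quad\xrightarrow{\ \text{Euler, step } \Delta t\ }\quad
z_{k+1} = z_k + \Delta t \, f_\theta(t_k, z_k),
```

which for $\Delta t = 1$ is exactly a residual connection $z_{k+1} = z_k + f_\theta(z_k)$.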
Equinox: neural networks in JAX via callable PyTrees and filtered transformations
JAX and PyTorch are two popular Python autodifferentiation frameworks. JAX is based around pure functions and functional programming. PyTorch has popularised the use of an object-oriented (OO) class-based syntax for defining parameterised …
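A minimal sketch of the pattern, following the Equinox documentation: a model is an immutable PyTree (an eqx.Module) whose array leaves are its parameters, and filtered transformations such as eqx.filter_jit and eqx.filter_grad handle the mix of array and non-array fields.

```python
import jax
import jax.numpy as jnp
import equinox as eqx

class Linear(eqx.Module):
    weight: jax.Array
    bias: jax.Array

    def __init__(self, in_size, out_size, key):
        wkey, bkey = jax.random.split(key)
        self.weight = jax.random.normal(wkey, (out_size, in_size))
        self.bias = jax.random.normal(bkey, (out_size,))

    def __call__(self, x):
        return self.weight @ x + self.bias

@eqx.filter_jit  # traces array leaves, treats everything else as static
def loss(model, x, y):
    pred = jax.vmap(model)(x)
    return jnp.mean((pred - y) ** 2)

key = jax.random.PRNGKey(0)
model = Linear(2, 1, key)
x = jax.random.normal(key, (8, 2))
y = jnp.zeros((8, 1))
grads = eqx.filter_grad(loss)(model, x, y)  # a Linear-shaped PyTree of gradients
```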
Neural Rough Differential Equations for Long Time Series
Neural controlled differential equations (CDEs) are the continuous-time analogue of recurrent neural networks, as Neural ODEs are to residual networks, and offer a memory-efficient continuous-time way to model functions of potentially irre…
Neural Controlled Differential Equations for Online Prediction Tasks
Neural controlled differential equations (Neural CDEs) are a continuous-time extension of recurrent neural networks (RNNs), achieving state-of-the-art (SOTA) performance at modelling functions of irregular time series. In order to interpre…
Efficient and Accurate Gradients for Neural SDEs
Neural SDEs combine many of the best qualities of both RNNs and SDEs: memory efficient training, high-capacity function approximation, and strong priors on model space. This makes them a natural choice for modelling many types of temporal …
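The reversible Heun method from this paper is available in Diffrax as ReversibleHeun. A minimal sketch solving dy = -y dt + 0.1 dW (constants and tolerances chosen arbitrarily):

```python
import jax.random as jr
import diffrax

drift = lambda t, y, args: -y
diffusion = lambda t, y, args: 0.1  # scalar noise
bm = diffrax.VirtualBrownianTree(t0=0.0, t1=1.0, tol=1e-3, shape=(), key=jr.PRNGKey(0))
terms = diffrax.MultiTerm(diffrax.ODETerm(drift), diffrax.ControlTerm(diffusion, bm))
# Algebraically reversible steps let the backward pass reconstruct the forward
# trajectory exactly, rather than storing it or re-approximating it.
sol = diffrax.diffeqsolve(terms, diffrax.ReversibleHeun(), t0=0.0, t1=1.0, dt0=0.01, y0=1.0)
print(sol.ys)
```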
Neural CDEs for Long Time Series via the Log-ODE Method
Neural Controlled Differential Equations (Neural CDEs) are the continuous-time analogue of an RNN, just as Neural ODEs are analogous to ResNets. However, just like RNNs, training Neural CDEs can be difficult for long time series. Here, we p…
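Schematically (as in this line of work, stated here only as a sketch): over each coarse step $[t_k, t_{k+1}]$, the log-ODE method summarises the driving path $X$ by its depth-$N$ logsignature and integrates an ODE against that summary,

```latex
\frac{\mathrm{d}z}{\mathrm{d}u}(u)
  = \hat{f}_\theta\bigl(z(u)\bigr)\,
    \frac{\mathrm{LogSig}^{N}_{t_k,\, t_{k+1}}(X)}{t_{k+1} - t_k},
\qquad u \in [t_k, t_{k+1}],
```

so a long, finely sampled series enters only through logsignatures over a much smaller number of steps.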
View article: "Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code
"Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code Open
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step i…
Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU
Signatory is a library for calculating and performing functionality related to the signature and logsignature transforms. The focus is on machine learning, and as such includes features such as CPU parallelism, GPU support, and backpropaga…
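A minimal sketch of the core calls (Signatory pairs with PyTorch; paths are batched as (batch, stream, channels)):

```python
import torch
import signatory

path = torch.rand(1, 10, 3)  # batch 1, stream length 10, 3 channels
sig = signatory.signature(path, depth=3)
print(sig.shape)  # (1, 39): 3 + 3**2 + 3**3 terms at depths 1, 2, 3
logsig = signatory.logsignature(path, depth=3)  # compressed representation
```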
Neural SDEs as Infinite-Dimensional GANs
Stochastic differential equations (SDEs) are a staple of mathematical modelling of temporal dynamics. However, a fundamental limitation has been that such models have typically been relatively inflexible, which recent work introducing Neur…
View article: "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms
"Hey, that's not an ODE": Faster ODE Adjoints via Seminorms Open
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step i…
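In Diffrax this trick is exposed as a seminorm for the adjoint pass's step-size controller; a minimal sketch, assuming Diffrax's documented API (tolerances arbitrary):

```python
import diffrax

# Accept/reject backward steps using a seminorm that ignores error in the
# parameter-gradient channels, which do not feed back into the rest of the
# adjoint dynamics and so need not be tracked by the controller.
adjoint = diffrax.BacksolveAdjoint(
    stepsize_controller=diffrax.PIDController(
        rtol=1e-4, atol=1e-6, norm=diffrax.adjoint_rms_seminorm
    )
)
# ...then pass `adjoint=adjoint` to diffrax.diffeqsolve(...).
```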
A Generalised Signature Method for Multivariate Time Series Feature Extraction
The 'signature method' refers to a collection of feature extraction techniques for multivariate time series, derived from the theory of controlled differential equations. There is a great deal of flexibility as to how this method can be ap…
Generalised Interpretable Shapelets for Irregular Time Series
The shapelet transform is a form of feature extraction for time series, in which a time series is described by its similarity to each of a collection of 'shapelets'. However, it has previously suffered from a number of limitations, such as …
Neural Controlled Differential Equations for Irregular Time Series
Neural ordinary differential equations are an attractive option for modelling temporal dynamics. However, a fundamental issue is that the solution to an ordinary differential equation is determined by its initial condition, and there is no…
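The model from this paper, schematically: interpolate the (possibly irregular) observations into a continuous path $X$, then evolve a hidden state driven by $X$,

```latex
z_{t} = z_{t_0} + \int_{t_0}^{t} f_\theta\bigl(z_s\bigr) \,\mathrm{d}X_s
      = z_{t_0} + \int_{t_0}^{t} f_\theta\bigl(z_s\bigr)\,
        \frac{\mathrm{d}X}{\mathrm{d}s}(s) \,\mathrm{d}s,
```

so incoming data modulates the dynamics continuously, rather than entering only through the initial condition as in a Neural ODE.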
The degree-$(n+1)$ polynomials are the most difficult $C^{\,n + 1}$ functions to uniformly approximate with degree-$n$ polynomials
There exist well-known tight bounds on the error between a function $f \in C^{\,n + 1}([-1, 1])$ and its best polynomial approximation of degree $n$. We show that the error meets these bounds when and only when $f$ is a polynomial of degre…
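For context, the classical bound in question, together with a standard worked example of the extremal case (this illustration is not taken from the paper's truncated abstract):

```latex
% Classical bound, for f in C^{n+1}([-1, 1]) with best degree-n approximation p^*:
\|f - p^*\|_\infty \;\le\; \frac{\|f^{(n+1)}\|_\infty}{2^n\,(n+1)!}.
% Worked case f(x) = x^{n+1}: here f^{(n+1)} \equiv (n+1)!, and the error of the
% best approximation is the monic Chebyshev polynomial, attaining the bound:
x^{n+1} - p^*(x) = 2^{-n}\,T_{n+1}(x), \qquad \|x^{n+1} - p^*\|_\infty = 2^{-n}.
```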
Deep Signature Transforms
The signature is an infinite graded sequence of statistics known to characterise a stream of data up to a negligible equivalence class. It is a transform which has previously been treated as a fixed feature transformation, on top of which …
Universal Approximation with Deep Narrow Networks
The classical Universal Approximation Theorem holds for neural networks of arbitrary width and bounded depth. Here we consider the natural 'dual' scenario for networks of bounded width and arbitrary depth. Precisely, let $n$ be the number …