James Requeima
Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning
In scientific domains -- from biology to the social sciences -- many questions boil down to: "What effect will we observe if we intervene on a particular variable?" If the causal relationships (e.g., a causal graph) are known, it is p…
JoLT: Joint Probabilistic Predictions on Tabular Data Using LLMs
We introduce a simple method for probabilistic predictions on tabular data based on Large Language Models (LLMs) called JoLT (Joint LLM Process for Tabular data). JoLT uses the in-context learning capabilities of LLMs to define joint distr…
A Meta-Learning Approach to Bayesian Causal Discovery
Discovering a unique causal structure is difficult due to both inherent identifiability issues and the consequences of finite data. As such, uncertainty over causal structures, such as that obtained from a Bayesian posterior, is often n…
Context is Key: A Benchmark for Forecasting with Essential Textual Information
Forecasting is a critical task in decision-making across numerous domains. While historical numerical data provide a start, they fail to convey the complete context for reliable and accurate predictions. Human forecasters frequently rely o…
AI for operational methane emitter monitoring from space
Mitigating methane emissions is the fastest way to stop global warming in the short-term and buy humanity time to decarbonise. Despite the demonstrated ability of remote sensing instruments to detect methane plumes, no system has been avai…
End-to-end data-driven weather forecasting. (source code, sample data and trained models)
This resource contains the key source code, some sample data and the trained models from the paper: "End-to-end data-driven weather forecasting."
Translation Equivariant Transformer Neural Processes
The effectiveness of neural processes (NPs) in modelling posterior prediction maps -- the mapping from data to posterior predictive distributions -- has significantly improved since their inception. This improvement can be attributed to tw…
LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses. Moreover, the expertise neede…
Diffusion-Augmented Neural Processes
Over the last few years, Neural Processes have become a useful modelling tool in many application areas, such as healthcare and climate sciences, in which data are scarce and prediction uncertainty estimates are indispensable. However, the…
Sim2Real for Environmental Neural Processes
Machine learning (ML)-based weather models have recently undergone rapid improvements. These models are typically trained on gridded reanalysis data from numerical data assimilation systems. However, reanalysis data comes with limitations,…
Environmental Sensor Placement with Convolutional Gaussian Neural Processes
Environmental sensors are crucial for monitoring weather conditions and the impacts of climate change. However, it is challenging to place sensors in a way that maximises the informativeness of their measurements, particularly in remote re…
Challenges and Pitfalls of Bayesian Unlearning
Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions to a trained model. Approximate unlearning is one class of methods for this task which avoids the need to retrain the model fro…
Practical Conditional Neural Processes Via Tractable Dependent Predictions
Conditional Neural Processes (CNPs; Garnelo et al., 2018a) are meta-learning models which leverage the flexibility of deep learning to produce well-calibrated predictions and naturally handle off-the-grid and missing data. CNPs scale to la…
Efficient Gaussian Neural Processes for Regression
Conditional Neural Processes (CNP; Garnelo et al., 2018) are an attractive family of meta-learning models which produce well-calibrated predictions, enable fast inference at test time, and are trainable via a simple maximum likelihood proc…
The Gaussian Neural Process
Neural Processes (NPs; Garnelo et al., 2018a,b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes. We provide a rigorous analysis of the standard maximum-likelihood objective used t…
Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes
Stationary stochastic processes (SPs) are a key component of many probabilistic models, such as those for off-the-grid spatio-temporal data. They enable the statistical symmetry of underlying physical phenomena to be leveraged, thereby aid…
TaskNorm: Rethinking Batch Normalization for Meta-Learning
Modern meta-learning approaches for image classification rely on increasingly deep networks to achieve state-of-the-art performance, making batch normalization an essential component of meta-learning pipelines. However, the hierarchical na…
Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes
The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. We introduce a conditional neural process based approach t…
Convolutional Conditional Neural Processes
We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning prob…
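The key property the abstract names, translation equivariance, can be illustrated with a minimal sketch of the kind of set-convolution embedding that underlies the ConvCNP: because the embedding depends only on differences between grid points and context locations, shifting the data and the grid together leaves it unchanged. The function name and lengthscale below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def setconv_encode(xc, yc, grid, lengthscale=0.25):
    """Embed an off-the-grid context set (xc, yc) onto a uniform grid.

    Channel 0 is a 'density' channel; channel 1 is the density-normalised
    signal. The weights depend only on grid - xc, so the embedding is
    translation equivariant.
    """
    w = np.exp(-0.5 * (grid[:, None] - xc[None, :]) ** 2 / lengthscale ** 2)
    density = w.sum(axis=1)
    signal = w @ yc
    return np.stack([density, signal / np.maximum(density, 1e-8)], axis=-1)

# Shifting context and grid by the same amount leaves the embedding intact.
xc = np.array([0.1, 0.5, 0.9])
yc = np.array([1.0, -1.0, 0.5])
grid = np.linspace(0.0, 1.0, 32)
delta = 3.7
emb = setconv_encode(xc, yc, grid)
emb_shifted = setconv_encode(xc + delta, yc, grid + delta)
```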

The Gaussian Process Autoregressive Regression Model (GPAR)
Multi-output regression models must exploit dependencies between outputs to maximise predictive performance. The application of Gaussian processes (GPs) to this setting typically yields models that are computationally demanding and have li…
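The autoregressive decomposition GPAR uses, modelling each output with a GP conditioned on the inputs and all earlier outputs, can be sketched as follows. The tiny RBF-kernel GP, its lengthscale, and the noise level are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_mean(X, y, Xs, noise=1e-4):
    # Posterior mean of a zero-mean GP with an RBF kernel.
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

def gpar_predict(X, Y, Xs):
    """GPAR-style chain over the columns of Y: output j is modelled by a GP
    on the inputs augmented with outputs 1..j-1, and earlier predictions are
    fed back in as extra inputs at test time."""
    preds = []
    for j in range(Y.shape[1]):
        Xin = np.hstack([X, Y[:, :j]])
        Xsin = np.hstack([Xs] + preds) if preds else Xs
        preds.append(gp_mean(Xin, Y[:, j], Xsin)[:, None])
    return np.hstack(preds)

# Two dependent outputs: the second is a deterministic function of the first,
# so conditioning on it should make the chained prediction accurate.
X = np.linspace(0, 3, 25)[:, None]
Y = np.hstack([np.sin(X), np.sin(X) ** 2])
pred = gpar_predict(X, Y, X)
```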
Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space
Chemical space is so large that brute force searches for new interesting molecules are infeasible. High-throughput virtual screening via computer cluster simulations can speed up the discovery process by collecting very large amounts of da…
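The parallel Thompson-sampling idea in the abstract, with each worker drawing its own posterior sample and greedily acting on it so that no coordination is needed, can be sketched on a Bernoulli bandit as a stand-in for molecule scoring. The Beta-Bernoulli model and batch size below are illustrative assumptions, not the paper's chemistry pipeline.

```python
import random

def thompson_batch(successes, failures, batch_size, rng):
    """Pick a batch of arms to evaluate in parallel: each 'worker' draws an
    independent sample from every arm's Beta posterior and takes the argmax
    of its own sample."""
    batch = []
    for _ in range(batch_size):
        samples = [rng.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        batch.append(max(range(len(samples)), key=samples.__getitem__))
    return batch

def run(true_probs, n_rounds, batch_size=4, seed=0):
    rng = random.Random(seed)
    k = len(true_probs)
    successes, failures = [0] * k, [0] * k
    for _ in range(n_rounds):
        for arm in thompson_batch(successes, failures, batch_size, rng):
            if rng.random() < true_probs[arm]:
                successes[arm] += 1
            else:
                failures[arm] += 1
    return successes, failures

# Sampling concentrates on the best arm as its posterior sharpens.
successes, failures = run([0.2, 0.5, 0.8], n_rounds=500)
pulls = [s + f for s, f in zip(successes, failures)]
```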