Ricky T. Q. Chen
Enhancing Diffusion-Based Sampling with Molecular Collective Variables
Diffusion-based samplers learn to sample complex, high-dimensional distributions using energies or log densities alone, without training data. Yet, they remain impractical for molecular sampling because they are often slower than molecular…
Edit Flows: Flow Matching with Edit Operations
Autoregressive generative models naturally generate variable-length sequences, while non-autoregressive models struggle, often imposing rigid, token-wise structures. We propose Edit Flows, a non-autoregressive model that overcomes these li…
FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions
Material discovery is a critical area of research with the potential to revolutionize various fields, including carbon capture, renewable energy, and electronics. However, the immense scale of the chemical space makes it challenging to exp…
Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control
Dynamical generative models that produce samples through an iterative process, such as Flow Matching and denoising diffusion models, have seen widespread use, but there have not been many theoretically sound methods for improving these mod…
Leveraging normalizing flows for orbital-free density functional theory
Orbital-free density functional theory (OF-DFT) for real-space systems has historically depended on Lagrange optimization techniques, primarily due to the inability of previously proposed electron density approaches to ensure the normaliza…
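As background (standard flow notation, not quoted from the abstract): a normalizing flow's density is normalized by construction via the change of variables

$$p_\theta(x) = p_z\big(f_\theta^{-1}(x)\big)\left|\det \frac{\partial f_\theta^{-1}(x)}{\partial x}\right|, \qquad \int p_\theta(x)\,dx = 1,$$

so an electron density modeled as $\rho_\theta = N\,p_\theta$ integrates to the electron number $N$ by design, removing the need for an explicit normalization constraint in the variational optimization.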
Discrete Flow Matching
Despite Flow Matching and diffusion models having emerged as powerful generative paradigms for continuous variables such as images and videos, their application to high-dimensional discrete data, such as language, is still limited. In this…
FlowMM: Generating Materials with Riemannian Flow Matching
Crystalline materials are a fundamental component in next-generation technologies, yet modeling their distribution presents unique computational challenges. Of the plausible arrangements of atoms in a periodic lattice only a vanishingly sm…
Variational Schrödinger Diffusion Models
Schrödinger bridge (SB) has emerged as the go-to method for optimizing transportation plans in diffusion models. However, SB requires estimating the intractable forward score functions, inevitably resulting in the costly implicit training …
Reflected Schrödinger Bridge for Constrained Generative Modeling
Diffusion models have become the go-to method for large-scale generative models in real-world applications. These applications often involve data distributions confined within bounded domains, typically requiring ad-hoc thresholding techni…
TaskMet: Task-Driven Metric Learning for Model Learning
Deep learning models are often deployed in downstream tasks that the training procedure may not be aware of. For example, models solely trained to achieve accurate predictions may struggle to perform well on downstream tasks because seemin…
Orbital-Free Density Functional Theory with Continuous Normalizing Flows
Orbital-free density functional theory (OF-DFT) provides an alternative approach for calculating the molecular electronic energy, relying solely on the electron density. In OF-DFT, the ground-state density is optimized variationally t…
Flow Matching on General Geometries
We propose Riemannian Flow Matching (RFM), a simple yet powerful framework for training continuous normalizing flows on manifolds. Existing methods for generative modeling on manifolds either require expensive simulation, are inherently un…
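For context, a minimal flow-matching training step in the Euclidean special case looks as follows; the function and argument names are mine, and the Riemannian version in the paper replaces the straight-line interpolation below with a geodesic (or premetric-based) path on the manifold:

import torch

def flow_matching_loss(vector_field, x0, x1):
    # Conditional Flow Matching, Euclidean special case (a sketch, not the
    # paper's manifold construction). vector_field: (t, x_t) -> velocity.
    t = torch.rand(x0.shape[0], 1)       # t ~ Uniform[0, 1]
    x_t = (1 - t) * x0 + t * x1          # straight-line conditional path
    target = x1 - x0                     # d/dt of that path
    return ((vector_field(t, x_t) - target) ** 2).sum(-1).mean()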
Neural Conservation Laws: A Divergence-Free Perspective
We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law. This is enabled by the observation that any solution of the continuity equation can be represented …
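For reference, the continuity equation in question reads (standard notation, not quoted from the paper)

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho v) = 0,$$

and the representation alluded to stacks density and flux into a single time-space vector field $u = (\rho, \rho v)$, for which the equation becomes the divergence-free condition $\nabla_{(t,x)} \cdot u = 0$; any divergence-free parameterization of $u$ therefore satisfies the conservation law by construction.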
Matching Normalizing Flows and Probability Paths on Manifolds
Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs on manifolds by minimizing probab…
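For background, the ODE in question is $\dot{x}(t) = v_\theta(t, x(t))$, and a CNF's log-density evolves along trajectories by the instantaneous change-of-variables formula (standard CNF material, not quoted from the abstract):

$$\frac{d}{dt}\log p_t(x(t)) = -\operatorname{tr}\!\left(\frac{\partial v_\theta}{\partial x}\big(t, x(t)\big)\right).$$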
Semi-Discrete Normalizing Flows through Differentiable Tessellation
Mapping between discrete and continuous distributions is a difficult task, and many have had to resort to heuristic approaches. We propose a tessellation-based approach that directly learns quantization boundaries in a continuous space, c…
Fully differentiable optimization protocols for non-equilibrium steady states
In the case of quantum systems interacting with multiple environments, the time-evolution of the reduced density matrix is described by the Liouvillian. For a variety of physical observables, the long-time limit or steady state (SS) soluti…
View article: "Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code
"Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code Open
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step i…
Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations
We perform scalable approximate inference in continuous-depth Bayesian neural networks. In this model class, uncertainty about separate weights in each layer gives hidden units that follow a stochastic differential equation. We demonstrate…
Convex Potential Flows: Universal Probability Distributions with Optimal Transport and Convex Optimization
Flow-based models are powerful tools for designing probabilistic models with tractable density. This paper introduces Convex Potential Flows (CP-Flow), a natural and efficient parameterization of invertible models inspired by the optimal t…
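As I read the construction (a sketch; details may differ from the paper), the invertible map is the gradient of a strictly convex potential $F$, so $f = \nabla F$ is invertible and the change of variables uses the Hessian as the Jacobian:

$$\log p_x(x) = \log p_z(z) - \log\det \nabla^2 F(z), \qquad z = (\nabla F)^{-1}(x).$$

By Brenier's theorem, gradients of convex functions are exactly the optimal transport maps for quadratic cost, which is the optimal-transport connection in the title.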
Inverse design of dissipative quantum steady-states with implicit differentiation
Inverse design of a property that depends on the steady state of an open quantum system is commonly done with grid-search-type methods. In this paper, we present a new methodology that allows us to compute the gradient of the steady-state …
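One way to read the implicit-differentiation step (a sketch, not necessarily the paper's exact derivation): the steady state satisfies $\mathcal{L}(\theta)\,\rho_{ss}(\theta) = 0$, so differentiating through this defining equation gives the linear system

$$\mathcal{L}(\theta)\,\partial_\theta \rho_{ss} = -\big(\partial_\theta \mathcal{L}(\theta)\big)\,\rho_{ss},$$

which yields gradients of any steady-state observable without differentiating through a long time evolution or resorting to grid search.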
Neural Spatio-Temporal Point Processes
We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and spa…
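For background, such models are trained by maximizing the standard spatio-temporal point-process log-likelihood over events $(t_i, x_i)$ with history $\mathcal{H}_t$ (textbook form, not quoted from the abstract):

$$\log p\big(\{(t_i, x_i)\}\big) = \sum_i \log \lambda(t_i, x_i \mid \mathcal{H}_{t_i}) \;-\; \int_0^T \!\!\int_{\mathcal{X}} \lambda(t, x \mid \mathcal{H}_t)\, dx\, dt.$$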
Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities such as the curvature can help de-sensitize common hyperparamet…
Learning Neural Event Functions for Ordinary Differential Equations
The existing Neural ODE formulation relies on an explicit knowledge of the termination time. We extend Neural ODEs to implicitly defined termination criteria modeled by neural event functions, which can be chained together and differentiat…
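The torchdiffeq library implements this event handling as odeint_event; a minimal falling-mass sketch follows (signature as I recall it; consult the library's documentation):

import torch
from torchdiffeq import odeint, odeint_event

def dynamics(t, state):
    height, velocity = state
    return torch.stack([velocity, torch.as_tensor(-9.8)])  # free fall

def event_fn(t, state):
    return state[0]  # integration terminates when the height crosses zero

y0 = torch.tensor([10.0, 0.0])
event_t, solution = odeint_event(
    dynamics, y0, torch.tensor(0.0),
    event_fn=event_fn, odeint_interface=odeint,
)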
View article: "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms
"Hey, that's not an ODE": Faster ODE Adjoints via Seminorms Open
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step i…
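In torchdiffeq this is exposed as a one-line option on the adjoint solve; a minimal sketch (the model and data here are placeholders):

import torch
from torchdiffeq import odeint_adjoint

class ODEFunc(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(2, 2)

    def forward(self, t, y):
        return torch.tanh(self.net(y))

func = ODEFunc()
y0, t = torch.randn(8, 2), torch.linspace(0.0, 1.0, 10)
# Accept/reject backward-pass steps under a seminorm that ignores error in
# the accumulated parameter-gradient channels, as proposed in the paper.
ys = odeint_adjoint(func, y0, t, adjoint_options=dict(norm="seminorm"))
ys[-1].pow(2).mean().backward()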
SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models …
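A sketch of the estimator's structure, with my own function names: truncate the telescoping series of IWAE bounds at a random level and reweight by the survival probability so the estimate stays unbiased for log p(x):

import torch

def iwae_bound(logw, k):
    # IWAE_k = log((1/k) * sum_{i<=k} w_i), evaluated stably in log space
    return torch.logsumexp(logw[:k], dim=0) - torch.log(torch.tensor(float(k)))

def sumo(logw, K, tail_prob):
    # logw: log importance weights log[p(x, z_i) / q(z_i | x)], i = 1..K+1
    # K: sampled truncation level; tail_prob(j) = P(K >= j) under its sampler
    est = iwae_bound(logw, 1)
    for j in range(1, K + 1):
        delta = iwae_bound(logw, j + 1) - iwae_bound(logw, j)
        est = est + delta / tail_prob(j)   # Russian-roulette debiasing
    return est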
Scalable Gradients for Stochastic Differential Equations
The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradient…
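The method is implemented in the torchsde library; a minimal sketch (the architecture and shapes are placeholders, and solver choices are left to the library's defaults):

import torch
import torchsde

class SDE(torch.nn.Module):
    noise_type = "diagonal"
    sde_type = "ito"

    def __init__(self, dim=2):
        super().__init__()
        self.mu = torch.nn.Linear(dim, dim)
        self.sigma = torch.nn.Linear(dim, dim)

    def f(self, t, y):                  # drift
        return torch.tanh(self.mu(y))

    def g(self, t, y):                  # diagonal diffusion
        return torch.sigmoid(self.sigma(y))

sde, y0, ts = SDE(), torch.randn(16, 2), torch.linspace(0.0, 1.0, 20)
# Stochastic adjoint: constant-memory gradients through the SDE solve.
ys = torchsde.sdeint_adjoint(sde, y0, ts)
ys[-1].pow(2).mean().backward()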
Neural Networks with Cheap Differential Operators
Gradients of neural networks can be computed efficiently for any architecture, but some applications require differential operators with higher time complexity. We describe a family of restricted neural network architectures that allow eff…
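For context on the cost gap: computing the divergence tr(∂f/∂x) of a generic network exactly takes d separate backward passes, so it is commonly estimated stochastically with Hutchinson's trace estimator, sketched below; the paper's restricted architectures instead make such operators cheap and exact. This sketch is the generic baseline, not the paper's method:

import torch

def hutchinson_divergence(f, x, n_samples=1):
    # Unbiased estimate of div f(x) = tr(df/dx) using tr(J) = E_v[v^T J v]
    # with E[v v^T] = I; assumes x has shape (batch, d) and f maps
    # (batch, d) -> (batch, d).
    x = x.requires_grad_(True)
    fx = f(x)
    div = torch.zeros(x.shape[0])
    for _ in range(n_samples):
        v = torch.randn_like(x).sign()  # Rademacher probe vector
        (vjp,) = torch.autograd.grad(fx, x, grad_outputs=v, create_graph=True)
        div = div + (vjp * v).sum(dim=-1)
    return div / n_samples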