Andrew Lizarraga
Modeling ring current proton distribution using MLP, CNN, LSTM, and transformer networks
This study develops ring current proton flux models using four neural network architectures: a multilayer perceptron (MLP), a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a Transformer network. …
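As a rough illustration of the four architecture families compared here, the sketch below (PyTorch) regresses a window of driver time series onto a single flux value. The window length, feature count, and layer sizes are placeholder assumptions, not the study's configuration.

```python
# Minimal sketch (not the paper's models): four architecture families mapping a
# window of driver features to a scalar proton-flux prediction.
import torch
import torch.nn as nn

SEQ_LEN, N_FEAT = 24, 8  # assumed: 24 time steps of 8 driver features


class MLPFlux(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                   # (B, SEQ_LEN * N_FEAT)
            nn.Linear(SEQ_LEN * N_FEAT, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):                                   # x: (B, SEQ_LEN, N_FEAT)
        return self.net(x)


class CNNFlux(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_FEAT, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        h = self.conv(x.transpose(1, 2)).squeeze(-1)        # (B, 32)
        return self.head(h)


class LSTMFlux(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEAT, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                        # last hidden state -> flux


class TransformerFlux(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(N_FEAT, 64)
        layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.enc(self.proj(x))
        return self.head(h.mean(dim=1))                     # pool over time -> flux


if __name__ == "__main__":
    x = torch.randn(4, SEQ_LEN, N_FEAT)
    for model in (MLPFlux(), CNNFlux(), LSTMFlux(), TransformerFlux()):
        print(type(model).__name__, model(x).shape)         # each -> (4, 1)
```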
Latent Adaptive Planner for Dynamic Manipulation
We present the Latent Adaptive Planner (LAP), a trajectory-level latent-variable policy for dynamic nonprehensile manipulation (e.g., box catching) that formulates planning as inference in a low-dimensional latent space and is learned effe…
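For readers unfamiliar with planning-as-inference, the sketch below shows one generic way a trajectory-level latent-variable policy can be set up: a recurrent encoder posits a Gaussian posterior over a low-dimensional latent for a whole trajectory, and a decoder maps (observation, latent) to an action sequence, trained with an ELBO. All dimensions, the Gaussian posterior, and the KL weight are illustrative assumptions and do not reproduce LAP itself.

```python
# Hedged sketch of a trajectory-level latent-variable policy trained as a CVAE.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON, Z_DIM = 16, 4, 32, 8


class TrajectoryEncoder(nn.Module):
    """Amortized posterior q(z | trajectory)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(OBS_DIM + ACT_DIM, 64, batch_first=True)
        self.mu = nn.Linear(64, Z_DIM)
        self.logvar = nn.Linear(64, Z_DIM)

    def forward(self, traj):                       # traj: (B, HORIZON, OBS+ACT)
        _, h = self.rnn(traj)
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)


class LatentPolicy(nn.Module):
    """Decoder p(actions | observation, z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + Z_DIM, 128), nn.ReLU(),
            nn.Linear(128, HORIZON * ACT_DIM),
        )

    def forward(self, obs, z):
        out = self.net(torch.cat([obs, z], dim=-1))
        return out.view(-1, HORIZON, ACT_DIM)


def elbo_step(enc, dec, obs, traj):
    """One variational training step: action reconstruction + KL to N(0, I)."""
    mu, logvar = enc(traj)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterization
    recon = dec(obs, z)
    actions = traj[..., OBS_DIM:]
    rec_loss = ((recon - actions) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return rec_loss + 1e-3 * kl


enc, dec = TrajectoryEncoder(), LatentPolicy()
obs = torch.randn(4, OBS_DIM)
traj = torch.randn(4, HORIZON, OBS_DIM + ACT_DIM)
print(elbo_step(enc, dec, obs, traj))
```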
Understanding Galaxy Morphology Evolution Through Cosmic Time via Redshift Conditioned Diffusion Models
Redshift measures the distance to galaxies and underlies our understanding of the origin of the Universe and galaxy evolution. Spectroscopic redshift is the gold-standard method for measuring redshift, but it requires about 1000 times mo…
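As background on redshift conditioning, the sketch below shows one common way to inject a scalar condition into a denoising network: embed the redshift with the same sinusoidal scheme used for the diffusion timestep and feed the fused embedding alongside the noisy input. The tiny MLP denoiser and all sizes are placeholders for illustration, not the paper's architecture.

```python
# Illustrative only: conditioning a denoiser on a scalar redshift value.
import math
import torch
import torch.nn as nn


def sinusoidal_embedding(t, dim=64):
    """Standard sinusoidal embedding of a scalar (timestep or redshift)."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    angles = t[:, None] * freqs[None, :]
    return torch.cat([angles.sin(), angles.cos()], dim=-1)


class RedshiftConditionedDenoiser(nn.Module):
    def __init__(self, img_dim=64 * 64):
        super().__init__()
        self.cond = nn.Linear(128, 128)     # fuse timestep + redshift embeddings
        self.net = nn.Sequential(
            nn.Linear(img_dim + 128, 256), nn.SiLU(),
            nn.Linear(256, img_dim),
        )

    def forward(self, noisy_img, t, redshift):
        emb = torch.cat(
            [sinusoidal_embedding(t), sinusoidal_embedding(redshift)], dim=-1
        )
        cond = self.cond(emb)
        return self.net(torch.cat([noisy_img, cond], dim=-1))  # predicted noise


# usage: predict noise for a batch of flattened galaxy images at redshift 1.5
model = RedshiftConditionedDenoiser()
x_t = torch.randn(4, 64 * 64)
t = torch.randint(0, 1000, (4,)).float()
z = torch.full((4,), 1.5)
eps_hat = model(x_t, t, z)
```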
Unlocking the Potential of Text-to-Image Diffusion with PAC-Bayesian Theory
Text-to-image (T2I) diffusion models have revolutionized generative modeling by producing high-fidelity, diverse, and visually realistic images from textual prompts. Despite these advances, existing models struggle with complex prompts inv…
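The paper's specific bound is not shown in this snippet; for orientation, the classical PAC-Bayesian generalization bound (McAllester/Maurer form, for losses in [0, 1]) states that for any prior P fixed before seeing the n training samples and any δ ∈ (0, 1), with probability at least 1 − δ, simultaneously for all posteriors Q:

$$
\mathbb{E}_{h \sim Q}\!\left[L(h)\right] \;\le\; \mathbb{E}_{h \sim Q}\!\left[\hat{L}(h)\right] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}
$$

Here L is the population risk, \hat{L} the empirical risk, and KL(Q‖P) the divergence between posterior and prior over model parameters; how the paper adapts a bound of this kind to text-to-image diffusion lies in the truncated part of the abstract.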
DODT: Enhanced Online Decision Transformer Learning through Dreamer's Actor-Critic Trajectory Forecasting
Advancements in reinforcement learning have led to the development of sophisticated models capable of learning complex decision-making tasks. However, efficiently integrating world models with decision transformers remains a challenge. In …
Differentiable VQ-VAE's for Robust White Matter Streamline Encodings
Given the complex geometry of white matter streamlines, autoencoders have been proposed as a dimension-reduction tool to simplify the analysis of streamlines in a low-dimensional latent space. However, despite these recent successes, the maj…
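For context on the quantization step such an encoder relies on, here is a minimal vector-quantization bottleneck with the standard straight-through gradient (the usual way VQ-VAEs are kept differentiable end to end). Codebook size, latent width, and the commitment weight are illustrative assumptions, not the paper's settings.

```python
# Minimal VQ bottleneck sketch with a straight-through estimator.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=256, code_dim=32, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1 / num_codes, 1 / num_codes)
        self.beta = beta

    def forward(self, z_e):                              # z_e: (B, code_dim)
        # nearest codebook entry for each encoder output
        dists = torch.cdist(z_e, self.codebook.weight)   # (B, num_codes)
        idx = dists.argmin(dim=-1)
        z_q = self.codebook(idx)
        # commitment + codebook losses (standard VQ-VAE objective terms)
        loss = self.beta * ((z_q.detach() - z_e) ** 2).mean() \
             + ((z_q - z_e.detach()) ** 2).mean()
        # straight-through estimator: copy gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx, loss


vq = VectorQuantizer()
z_q, idx, vq_loss = vq(torch.randn(10, 32))
print(z_q.shape, vq_loss.item())
```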
Improved BOLD Detection with Sliced Inverse Regression
Functional magnetic resonance imaging (fMRI) has been effective in linking task-related brain responses to changes in the blood-oxygen-level-dependent (BOLD) signal. However, its reliance on BOLD measurements makes it vulnerable to artifacts and false-…
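As background, classical sliced inverse regression (Li, 1991) finds a small set of directions in predictor space that carry the regression information; a plain NumPy sketch is below. How those directions are wired into BOLD detection is the paper's contribution and is not reproduced here.

```python
# Generic sliced inverse regression, illustrative only.
import numpy as np


def sliced_inverse_regression(X, y, n_slices=10, n_components=2):
    """Return the top SIR directions for predictors X and response y."""
    n, p = X.shape
    # 1. whiten the predictors
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv_sqrt = np.linalg.inv(np.linalg.cholesky(cov)).T
    Z = (X - mu) @ cov_inv_sqrt
    # 2. slice the response and average the whitened predictors within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # 3. eigen-decompose the between-slice covariance of slice means
    eigvals, eigvecs = np.linalg.eigh(M)
    top = eigvecs[:, ::-1][:, :n_components]
    # map the directions back to the original predictor scale
    return cov_inv_sqrt @ top


# usage: recover a 1-D effective direction from synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = np.sin(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.normal(size=500)
directions = sliced_inverse_regression(X, y, n_components=1)
print(directions.ravel())
```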
Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference
In tasks aiming for long-term returns, planning becomes essential. We study generative modeling for planning with datasets repurposed from offline reinforcement learning. Specifically, we identify temporal consistency in the absence of ste…
SDSRA: A Skill-Driven Skill-Recombination Algorithm for Efficient Policy Learning
In this paper, we introduce the Skill-Driven Skill Recombination Algorithm (SDSRA), a novel framework that significantly improves the efficiency of achieving maximum entropy in reinforcement learning tasks. We fi…
StreamNet: A WAE for White Matter Streamline Analysis
We present StreamNet, an autoencoder architecture for the analysis of the highly heterogeneous geometry of large collections of white matter streamlines. This proposed framework takes advantage of geometry-preserving properties of the Wass…
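The Wasserstein-autoencoder ingredient referenced here typically amounts to a divergence penalty that pulls the aggregate latent distribution toward the prior; a minimal MMD-style penalty (in the spirit of WAE-MMD, Tolstikhin et al.) is sketched below. The RBF kernel and bandwidth are assumptions for illustration, not StreamNet's actual loss.

```python
# Illustrative WAE-MMD latent penalty; encoder/decoder omitted.
import torch


def rbf_mmd(z_post, z_prior, bandwidth=1.0):
    """MMD^2 estimate (biased V-statistic) between posterior and prior samples."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * bandwidth ** 2))

    k_pp = kernel(z_post, z_post).mean()
    k_qq = kernel(z_prior, z_prior).mean()
    k_pq = kernel(z_post, z_prior).mean()
    return k_pp + k_qq - 2 * k_pq


# usage inside a WAE training step:
z_post = torch.randn(64, 16) * 1.3 + 0.2      # stand-in for encoder outputs
z_prior = torch.randn(64, 16)                 # samples from the N(0, I) prior
penalty = rbf_mmd(z_post, z_prior)
# total_loss = reconstruction_loss + lambda_mmd * penalty
```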
Alignment of Tractography Streamlines using Deformation Transfer via Parallel Transport
We present a geometric framework for aligning white matter fiber tracts. By registering fiber tracts between brains, one expects to see overlap of anatomical structures that often provide meaningful comparisons across subjects. However,…
SrvfNet: A Generative Network for Unsupervised Multiple Diffeomorphic Functional Alignment
We present SrvfNet, a generative deep learning framework for the joint multiple alignment of large collections of functional data comprising square-root velocity functions (SRVF) to their templates. Our proposed framework is fully unsuperv…
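For readers new to the SRVF representation, the sketch below computes the square-root velocity function q(t) = f'(t)/√|f'(t)| and the warping action (q, γ) ↦ (q ∘ γ)·√γ' that SRVF-based alignment exploits; it is a finite-difference NumPy illustration, not SrvfNet itself.

```python
# Background sketch: SRVF transform and the warping group action.
import numpy as np


def srvf(f, t):
    """q(t) = f'(t) / sqrt(|f'(t)|), with q -> 0 where f' vanishes."""
    df = np.gradient(f, t)
    return df / np.sqrt(np.abs(df) + 1e-12)


def warp_srvf(q, t, gamma):
    """Group action (q, gamma) -> (q o gamma) * sqrt(gamma'), preserving the L2 norm."""
    dgamma = np.gradient(gamma, t)
    return np.interp(gamma, t, q) * np.sqrt(np.abs(dgamma))


# usage: two bumps differing only by a warp have matching SRVFs after warping
t = np.linspace(0, 1, 200)
gamma = t ** 1.5                      # a simple boundary-preserving warp
f = np.exp(-80 * (t - 0.5) ** 2)      # template function
g = np.exp(-80 * (gamma - 0.5) ** 2)  # warped copy f(gamma(t))
q_f, q_g = srvf(f, t), srvf(g, t)
aligned = warp_srvf(q_f, t, gamma)    # matches q_g up to discretization error
print(np.sqrt(np.mean((aligned - q_g) ** 2)))
```

Because this action acts by isometries of the L2 norm, comparing functions up to warping reduces to ordinary L2 matching in SRVF space, which is what makes joint template estimation and alignment tractable.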