Andrew Stirn
Generative genomics accurately predicts future experimental results
Realizing AI’s promise to accelerate biomedical research requires AI models that are both accurate and sufficiently flexible to capture the diversity of real-life experiments. Here, we describe a generative genomics framework for AI-based …
Cas13d-mediated isoform-specific RNA knockdown with a unified computational and experimental toolbox
Pre- and post-transcriptional mechanisms, including alternative promoters, termination signals, and splicing, play essential roles in diversifying protein output by generating distinct RNA and protein isoforms. Two major challenges in char…
The VampPrior Mixture Model
Widely used deep latent variable models (DLVMs), in particular Variational Autoencoders (VAEs), employ overly simplistic priors on the latent space. To achieve strong clustering performance, existing methods that replace the standard norma…
Cas13d-mediated isoform-specific RNA knockdown with a unified computational and experimental toolbox
Alternative splicing is an essential mechanism for diversifying proteins, in which mature RNA isoforms produce proteins with potentially distinct functions. Two major challenges in characterizing the cellular function of isoforms are the l…
Faithful Heteroscedastic Regression with Neural Networks
Heteroscedastic regression models a Gaussian variable's mean and variance as a function of covariates. Parametric methods that employ neural networks for these parameter maps can capture complex relationships in the data. Yet, optimizing n…
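The setup the abstract describes can be made concrete with a small sketch (not code from the paper): the heteroscedastic Gaussian negative log-likelihood that such models optimize, where a network would supply the mean and log-variance maps. The function name and the toy values are illustrative only.

```python
import numpy as np

def gaussian_nll(y, mean, log_var):
    """Per-sample negative log-likelihood of y under N(mean, exp(log_var)).

    In a heteroscedastic regression model, `mean` and `log_var` would be
    the outputs of neural-network maps evaluated at the covariates.
    """
    return 0.5 * (np.log(2 * np.pi) + log_var + (y - mean) ** 2 / np.exp(log_var))

# Toy check: at y == mean with unit variance (log_var = 0),
# the NLL reduces to the normalizing constant 0.5 * log(2*pi).
y = np.array([0.0])
nll = gaussian_nll(y, mean=np.array([0.0]), log_var=np.array([0.0]))
```

Parameterizing the log-variance (rather than the variance directly) keeps the predicted variance positive without constraints, which is the standard trick in this setting.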
Variational Variance: Simple and Reliable Predictive Variance Parameterization
An often overlooked sleight of hand performed with variational autoencoders (VAEs), which has proliferated throughout the literature, is to misrepresent the posterior predictive (decoder) distribution's expectation as a sample from that distribution.…
Variational Variance: Simple, Reliable, Calibrated Heteroscedastic Noise Variance Parameterization
Brittle optimization has been observed to adversely impact model likelihoods for regression and VAEs when simultaneously fitting neural network mappings from a (random) variable onto the mean and variance of a dependent Gaussian variable. …
A New Distribution on the Simplex with Auto-Encoding Applications
We construct a new distribution for the simplex using the Kumaraswamy distribution and an ordered stick-breaking process. We explore and develop the theoretical properties of this new distribution and prove that it exhibits symmetry under …
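The two ingredients the abstract names can be sketched in a few lines (an illustration under stated assumptions, not the paper's construction): Kumaraswamy variates drawn by inverse-CDF sampling, fed through a stick-breaking map onto the simplex. The specific parameter values below are arbitrary.

```python
import numpy as np

def kumaraswamy_sample(a, b, rng):
    """Inverse-CDF sample from Kumaraswamy(a, b) on (0, 1)."""
    u = rng.uniform(size=np.shape(a))
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def stick_break(v):
    """Map K-1 stick fractions v in (0, 1) to a point on the K-simplex:
    w_k = v_k * prod_{j<k}(1 - v_j), with the last weight the leftover stick."""
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)])
    return np.concatenate([v, [1.0]]) * remaining

rng = np.random.default_rng(0)
v = kumaraswamy_sample(np.full(4, 2.0), np.full(4, 3.0), rng)
w = stick_break(v)  # length-5 nonnegative vector summing to 1
```

The Kumaraswamy CDF has a closed-form inverse, which is what makes it attractive as a Beta surrogate in reparameterized variational inference.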
Autoencoding Topographic Factors
Topographic factor models separate overlapping signals into latent spatial functions to identify correlation structure across observations. These methods require the underlying structure to be held fixed and are not robust to deviations…
Thompson Sampling for Noncompliant Bandits
Thompson sampling, a Bayesian method for balancing exploration and exploitation in bandit problems, has theoretical guarantees and exhibits strong empirical performance in many domains. Traditional Thompson sampling, however, assumes perfe…
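For context, the traditional Thompson sampling the abstract contrasts against can be sketched for a Bernoulli bandit with Beta posteriors (a standard textbook instance, not the paper's noncompliant variant; arm probabilities and step count below are illustrative).

```python
import numpy as np

def thompson_bernoulli(true_probs, steps, seed=0):
    """Beta-Bernoulli Thompson sampling: draw a success rate per arm from
    its Beta posterior, pull the argmax arm, then update that posterior."""
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    alpha, beta = np.ones(k), np.ones(k)  # uniform Beta(1, 1) priors
    pulls = np.zeros(k, dtype=int)
    for _ in range(steps):
        arm = int(np.argmax(rng.beta(alpha, beta)))      # posterior sample per arm
        reward = int(rng.random() < true_probs[arm])      # Bernoulli reward
        alpha[arm] += reward                              # conjugate update
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.5, 0.8], steps=2000)
```

This classic version assumes the pulled arm is the arm actually played; relaxing that compliance assumption is the gap the paper addresses.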