Warren R. Morningstar
Forte: Finding Outliers with Representation Typicality Estimation
Generative models can now produce photorealistic synthetic data which is virtually indistinguishable from the real data used to train it. This is a significant evolution over previous models which could produce reasonable facsimiles of the…
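The abstract is truncated here, but as a rough illustration of typicality-based outlier scoring in a learned representation space, here is a minimal sketch; the encoder features, the value of k, and the k-NN scoring rule are illustrative assumptions, not the paper's exact estimator.

```python
# Hypothetical sketch of typicality-based outlier scoring in a learned
# representation space (encoder outputs, k, and the scoring rule are
# illustrative assumptions, not the paper's exact estimator).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def typicality_scores(train_feats, test_feats, k=5):
    """Larger score = less typical of training data = more outlier-like."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)   # (n_test, k) distances
    return dists.mean(axis=1)              # mean k-NN distance

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))        # stand-in for encoder features
inliers = rng.normal(size=(10, 64))
outliers = rng.normal(loc=4.0, size=(10, 64))
print(typicality_scores(train, inliers).mean(),
      typicality_scores(train, outliers).mean())
```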
Federated Variational Inference: Towards Improved Personalization and Generalization
Conventional federated learning algorithms train a single global model by leveraging all participating clients’ data. However, due to heterogeneity in client generative distributions and predictive models, these approaches may not appropri…
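One common way to formalize personalization alongside generalization in a variational framework is a hierarchical model with shared global parameters and client-specific local parameters. A sketch of such an ELBO, in my notation and not necessarily the paper's exact objective:

```latex
% Sketch (my notation, not necessarily the paper's exact objective):
% hierarchical variational inference with shared global parameters
% \theta and client-specific local parameters \phi_c across C clients
% with local datasets \mathcal{D}_c.
\mathcal{L} = \sum_{c=1}^{C} \mathbb{E}_{q(\theta)\, q(\phi_c)}
    \big[ \log p(\mathcal{D}_c \mid \theta, \phi_c) \big]
  - \sum_{c=1}^{C} \mathrm{KL}\big( q(\phi_c) \,\|\, p(\phi_c) \big)
  - \mathrm{KL}\big( q(\theta) \,\|\, p(\theta) \big)
```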
Augmentations vs Algorithms: What Works in Self-Supervised Learning
We study the relative effects of data augmentations, pretraining algorithms, and model architectures in Self-Supervised Learning (SSL). While the recent literature in this space leaves the impression that the pretraining algorithm is of cr…
Disentangling the Effects of Data Augmentation and Format Transform in Self-Supervised Learning of Image Representations
Self-Supervised Learning (SSL) enables training performant models using limited labeled data. One of the pillars underlying vision SSL is the use of data augmentations/perturbations of the input which do not significantly alter its semanti…
SASSL: Enhancing Self-Supervised Learning via Neural Style Transfer
Existing data augmentation in self-supervised learning, while diverse, fails to preserve the inherent structure of natural images. This results in distorted augmented samples with compromised semantic information, ultimately impacting down…
Random Field Augmentations for Self-Supervised Representation Learning
Self-supervised representation learning is heavily dependent on data augmentations to specify the invariances encoded in representations. Previous work has shown that applying diverse data augmentations is crucial to downstream performance…
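In the spirit of random-field augmentations (though not the paper's exact parameterization), a minimal sketch: warp an image with a smooth Gaussian random displacement field. The `strength` and `smoothness` knobs are made up for illustration.

```python
# Sketch in the spirit of random-field augmentations (not the paper's
# exact parameterization): warp an image with a smooth Gaussian random
# displacement field; `strength` and `smoothness` are made-up knobs.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_field_warp(img, strength=4.0, smoothness=8.0, seed=None):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Smooth white noise into a spatially correlated displacement field.
    dy = gaussian_filter(rng.normal(size=(h, w)), smoothness) * strength
    dx = gaussian_filter(rng.normal(size=(h, w)), smoothness) * strength
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + dy, xs + dx])
    return map_coordinates(img, coords, order=1, mode="reflect")

augmented = random_field_warp(np.random.rand(32, 32), seed=0)
```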
Towards Federated Learning Under Resource Constraints via Layer-wise Training and Depth Dropout
Large machine learning models trained on diverse data have recently seen unprecedented success. Federated learning enables training on private data that may otherwise be inaccessible, such as domain-specific datasets decentralized across m…
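The abstract cuts off before describing the method, but depth dropout, as the title suggests and much like stochastic depth, amounts to skipping whole blocks at random to cut the compute and memory of local training. A toy sketch with hypothetical stand-in blocks:

```python
# Toy sketch of depth dropout (hypothetical names; randomly skip whole
# residual blocks during training, much like stochastic depth).
import random

def forward_with_depth_dropout(blocks, x, keep_prob=0.5, training=True):
    for block in blocks:
        if training and random.random() > keep_prob:
            continue             # dropped block: identity shortcut only
        x = x + block(x)         # residual connection
    return x

blocks = [lambda v: 0.1 * v for _ in range(6)]   # stand-ins for layers
print(forward_with_depth_dropout(blocks, 1.0))
```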
Weighted Ensemble Self-Supervised Learning
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning. Advances in self-supervised learning (SSL) enable leveraging large unlabeled corpora for state-…
Federated Training of Dual Encoding Models on Small Non-IID Client Datasets
Dual encoding models that encode a pair of inputs are widely used for representation learning. Many approaches train dual encoding models by maximizing agreement between pairs of encodings on centralized training data. However, in many sce…
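For reference, the standard centralized building block for training dual encoders by maximizing agreement between paired encodings is an InfoNCE-style contrastive loss; the paper studies federated variants of this setup.

```python
# The standard building block for dual encoders: an InfoNCE loss that
# maximizes agreement between paired encodings (the paper studies
# federated training; this is just the centralized loss).
import numpy as np

def info_nce(za, zb, temperature=0.1):
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature            # (n, n) similarities
    # Row-wise log-softmax; diagonal entries are the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
za, zb = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(info_nce(za, zb))
```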
What Do We Mean by Generalization in Federated Learning?
Federated learning data is drawn from a distribution of distributions: clients are drawn from a meta-distribution, and their data are drawn from local data distributions. Thus generalization studies in federated learning should separate pe…
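My paraphrase of the separation the abstract calls for, written as two gaps with R(.) denoting risk evaluated on three pools of data:

```latex
% My paraphrase of the separation the abstract calls for, with R(.)
% denoting risk evaluated on three pools of data:
\text{out-of-sample gap} =
  R_{\text{unseen data, participating clients}} - R_{\text{training data}}
\qquad
\text{participation gap} =
  R_{\text{non-participating clients}} - R_{\text{unseen data, participating clients}}
```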
VIB is Half Bayes
In discriminative settings such as regression and classification there are two random variables at play, the inputs X and the targets Y. Here, we demonstrate that the Variational Information Bottleneck can be viewed as a compromise between…
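For reference, the standard Variational Information Bottleneck objective that the paper's analysis builds on: a stochastic encoder e(z|x), a variational decoder d(y|z), and a variational marginal m(z), with β trading prediction against compression.

```latex
% Standard VIB objective: stochastic encoder e(z|x), variational
% decoder d(y|z), variational marginal m(z), with \beta trading
% prediction against compression.
\mathcal{L}_{\mathrm{VIB}} =
  \mathbb{E}_{p(x,y)}\,\mathbb{E}_{e(z \mid x)}\big[-\log d(y \mid z)\big]
  + \beta\,\mathbb{E}_{p(x)}\,\mathrm{KL}\big(e(z \mid x)\,\|\,m(z)\big)
```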
Hunting for Dark Matter Subhalos in Strong Gravitational Lensing with Neural Networks
Dark matter substructures are interesting since they can reveal the properties of dark matter. Collisionless N-body simulations of cold dark matter show more substructures compared with the population of dwarf galaxy satellites observed in…
PAC$^m$-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime
The Bayesian posterior minimizes the "inferential risk" which itself bounds the "predictive risk". This bound is tight when the likelihood and prior are well-specified. However, since misspecification induces a gap, the Bayesian posterior p…
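As I read it, the multi-sample risk at the heart of the approach softens the Jensen gap between inferential and predictive risk: with posterior q over parameters and i.i.d. draws θ₁, …, θₘ ~ q,

```latex
% Multi-sample empirical risk, as I read the paper's construction:
\widehat{R}_m(q) = -\,\mathbb{E}_{\theta_{1:m} \sim q}
  \left[\log \frac{1}{m} \sum_{j=1}^{m} p(y \mid x, \theta_j)\right]
% m = 1 recovers the usual Gibbs risk; m \to \infty approaches the
% predictive risk -\log \mathbb{E}_{\theta \sim q}\, p(y \mid x, \theta).
```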
Density of States Estimation for Out-of-Distribution Detection
Perhaps surprisingly, recent studies have shown that probabilistic model likelihoods have poor specificity for out-of-distribution (OOD) detection and often assign higher likelihoods to OOD data than in-distribution data. To ameliorate this iss…
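A minimal sketch of the density-of-states idea: rather than thresholding the likelihood itself, fit a density to the statistics the model emits on training data and score inputs by how typical the statistic is. Here just one statistic (the log-likelihood) with made-up numbers; the paper's estimator may use several statistics and a different density model.

```python
# Minimal sketch of the density-of-states idea (one statistic, made-up
# numbers; the paper may use several statistics / other density models).
import numpy as np
from scipy.stats import gaussian_kde

train_loglik = np.random.default_rng(0).normal(-100.0, 5.0, size=5000)
dose = gaussian_kde(train_loglik)

def typicality(loglik):
    # High density = typical of training data; low density = OOD,
    # whether the likelihood is unusually low *or* unusually high.
    return dose(np.atleast_1d(loglik))

print(typicality(-102.0), typicality(-60.0))
```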
Automatic Differentiation Variational Inference with Mixtures
Automatic Differentiation Variational Inference (ADVI) is a useful tool for efficiently learning probabilistic models in machine learning. Generally approximate posteriors learned by ADVI are forced to be unimodal in order to facilitate us…
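A hedged toy sketch of the point at issue: a mixture-of-Gaussians variational posterior can cover several modes that a unimodal ADVI posterior cannot. One-dimensional example with the ELBO estimated by Monte Carlo; no gradient machinery is shown.

```python
# Toy sketch: mixture variational posterior on a bimodal 1-D target,
# ELBO estimated by Monte Carlo (no gradient machinery shown).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def log_p(z):                                   # bimodal target (unnormalized)
    return np.logaddexp(-0.5 * (z - 3.0)**2, -0.5 * (z + 3.0)**2)

w = np.array([0.5, 0.5])                        # mixture weights
mu = np.array([-3.0, 3.0])                      # component means
sigma = np.array([1.0, 1.0])                    # component stddevs

def log_q(z):
    comp = (-0.5 * ((z[:, None] - mu) / sigma)**2
            - np.log(sigma * np.sqrt(2.0 * np.pi)))
    return logsumexp(comp + np.log(w), axis=1)

def elbo(n=10_000):
    ks = rng.choice(len(w), size=n, p=w)        # sample component indices
    z = mu[ks] + sigma[ks] * rng.normal(size=n) # reparameterized draws
    return np.mean(log_p(z) - log_q(z))

print(elbo())
```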
Data-driven Reconstruction of Gravitationally Lensed Galaxies Using Recurrent Inference Machines
We present a machine-learning method for the reconstruction of the undistorted images of background sources in strongly lensed systems. This method treats the source as a pixelated image and utilizes the recurrent inference machine to iter…
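Schematically, a recurrent inference machine iterates x ← x + g_θ(∇ log p(y|x), hidden state), where g_θ is a trained RNN. In the sketch below a plain gradient step stands in for the learned update, and A is a stand-in for the lensing-plus-instrument operator.

```python
# Schematic RIM loop: a plain gradient step stands in for the trained
# RNN update; A is a stand-in for the lensing + instrument operator.
import numpy as np

rng = np.random.default_rng(0)
n = 16
A = rng.normal(size=(n, n)) / np.sqrt(n)        # toy forward operator
x_true = rng.normal(size=n)                     # "source" to recover
y = A @ x_true + 0.01 * rng.normal(size=n)      # noisy observation

x = np.zeros(n)
for _ in range(200):
    grad = A.T @ (y - A @ x)    # gradient of the data log-likelihood
    x = x + 0.1 * grad          # RIM: x += g_theta(grad, hidden_state)

print(np.linalg.norm(x - x_true))               # reconstruction error
```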
Source Structure and Molecular Gas Properties from High-resolution CO Imaging of SPT-selected Dusty Star-forming Galaxies
We present Atacama Large Millimeter/submillimeter Array (ALMA) observations of high-J CO lines (J_up = 6, 7, 8) and associated dust continuum toward five strongly lensed, dusty, star-forming galaxies at redshift z = 2.7–5.7. These galaxi…
Fast molecular outflow from a dusty star-forming galaxy in the early Universe
Molecular gas ejected from a distant galaxy. Galaxies grow by forming stars from cold molecular gas. The rate at which they do so is limited by various feedback processes (such as supernovae or stellar winds) that heat and/or eject gas from…
Analyzing interferometric observations of strong gravitational lenses with recurrent and convolutional neural networks
We use convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to estimate the parameters of strong gravitational lenses from interferometric observations. We explore multiple strategies and find that the best results are…