Importance sampling
Stan: A Probabilistic Programming Language
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides…
dynesty: a dynamic nested sampling package for estimating Bayesian posteriors and evidences
We present dynesty, a public, open-source, python package to estimate Bayesian posteriors and evidences (marginal likelihoods) using the dynamic nested sampling methods developed by Higson et al. By adaptively allocating samples based on p…
Importance Nested Sampling and the MultiNest Algorithm
Bayesian inference involves two main computational challenges. First, in estimating the parameters of some model for the data, the posterior distribution may well be highly multi-modal: a regime in which the convergence to stationarity …
FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling
The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. This model, however, was originally designed to be learned with the presence of both training and test …
The Curious Case of Neural Text Degeneration
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is …
Improved Denoising Diffusion Probabilistic Models
Denoising diffusion probabilistic models (DDPM) are a class of generative models which have recently been shown to produce excellent samples. We show that with a few simple modifications, DDPMs can also achieve competitive log-likelihoods …
Types of sampling in research
Sampling is one of the most important factors determining the accuracy of a study. This article reviews the sampling techniques used in research, including probability sampling techniques such as simple random sampling, systemati…
Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
Sampling from various kinds of distributions is an issue of paramount importance in statistics, since it is often the key ingredient for constructing estimators, test procedures or confidence intervals. In many situations, exact samp…
Data Analysis Recipes: Using Markov Chain Monte Carlo*
Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data…
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in su…
Remote estimation of the Wiener process over a channel with random delay
In this paper, we consider a problem of sampling a Wiener process, with samples forwarded to a remote estimator via a channel that consists of a queue with random delay. The estimator reconstructs a real-time estimate of the signal from ca…
Sampling of the Wiener Process for Remote Estimation Over a Channel With Random Delay
In this paper, we consider a problem of sampling a Wiener process, with samples forwarded to a remote estimator over a channel that is modeled as a queue. The estimator reconstructs an estimate of the real-time signal value from causally r…
Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric functi…
Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the ex…
Modern Monte Carlo methods for efficient uncertainty quantification and propagation: A survey
Uncertainty quantification (UQ) includes the characterization, integration, and propagation of uncertainties that result from stochastic variations and a lack of knowledge or data in the natural world. The Monte Carlo (MC) method is a sampling…
Comparison of quota sampling and stratified random sampling
It is rarely feasible for researchers to obtain data from all cases, so sampling is needed; this article therefore discusses probability and non-probability sampling. In this paper we studied the differences and simil…
Control Functionals for Monte Carlo Integration
A non-parametric extension of control variates is presented. These leverage gradient information on the sampling density to achieve substantial variance reduction. It is not required that the sampling density be normalized. The nov…
Reparameterizing discontinuous integrands for differentiable rendering
Differentiable rendering has recently opened the door to a number of challenging inverse problems involving photorealistic images, such as computational material design and scattering-aware reconstruction of geometry and materials from pho…
Transitional Markov Chain Monte Carlo: Observations and Improvements
The Transitional Markov chain Monte Carlo (TMCMC) method is a widely used method for Bayesian updating and Bayesian model class selection. The method is based on successively sampling from a sequence of distributions that gradually approac…
Generalized Multiple Importance Sampling
Importance Sampling methods are broadly used to approximate posterior distributions or some of their moments. In its standard approach, samples are drawn from a single proposal distribution and weighted properly. However, since the perf…
Unified Approach to Enhanced Sampling
The sampling problem lies at the heart of atomistic simulations and over the years many different enhanced sampling methods have been suggested toward its solution. These methods are often grouped into two broad families. On the one hand, …
Demonstrating an Order-of-Magnitude Sampling Enhancement in Molecular Dynamics Simulations of Complex Protein Systems
Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulatio…
Unbiased warped-area sampling for differentiable rendering
Differentiable rendering computes derivatives of the light transport equation with respect to arbitrary 3D scene parameters, and enables various applications in inverse rendering and machine learning. We present an unbiased and efficient d…
A practical and efficient approach for Bayesian quantum state estimation
Bayesian inference is a powerful paradigm for quantum state tomography, treating uncertainty in meaningful and informative ways. Yet the numerical challenges associated with sampling from complex probability distributions hamper Bayesian …
Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation
We consider the off-policy estimation problem of estimating the expected reward of a target policy using samples collected by a different behavior policy. Importance sampling (IS) has been a key technique to derive (nearly) unbiased estima…
Aether
Implementing Monte Carlo integration requires significant domain expertise. While simple samplers, such as unidirectional path tracing, are relatively forgiving, more complex algorithms, such as bidirectional path tracing or Metropolis met…
Not All Samples Are Created Equal: Deep Learning with Importance Sampling
Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on "infor…
Multi-Step Reinforcement Learning: A Unifying Algorithm
Unifying seemingly disparate algorithmic ideas to produce better performing algorithms has been a longstanding goal in reinforcement learning. As a primary example, TD(λ) elegantly unifies one-step TD prediction with Monte Carlo methods th…
BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems
We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural netw…