Yuansi Chen
When few labeled target data suffice: a theory of semi-supervised domain adaptation via fine-tuning from multiple adaptive starts
Semi-supervised domain adaptation (SSDA) aims to achieve high predictive performance in the target domain with limited labeled target data by exploiting abundant source and unlabeled target data. Despite its significance in numerous applic…
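The excerpt above does not spell out the method, so the following is only a rough numpy sketch of the general fine-tuning-from-multiple-starts idea, not the paper's algorithm: a source-trained logistic model is fine-tuned from several hypothetical starting points on a handful of labeled target examples, and the labeled-target loss selects among the candidates (all names, the choice of starts, and the selection rule are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad(w, X, y):
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def fit(w0, X, y, lr=0.5, steps=200):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

# Source domain: abundant labeled data; target domain: a small labeled set.
d = 5
w_true_src = rng.normal(size=d)
w_true_tgt = w_true_src + 0.5 * rng.normal(size=d)   # shifted target concept
Xs = rng.normal(size=(2000, d)); ys = (sigmoid(Xs @ w_true_src) > 0.5).astype(float)
Xt = rng.normal(size=(20, d));   yt = (sigmoid(Xt @ w_true_tgt) > 0.5).astype(float)

w_src = fit(np.zeros(d), Xs, ys)                     # source-trained model

# Hypothetical "multiple starts": interpolations between a cold start and
# the source solution stand in for the paper's adaptive starting points.
starts = [np.zeros(d), 0.5 * w_src, w_src]

# Fine-tune briefly from each start on the few labeled target points, then
# select by labeled-target loss (cross-validation would be more realistic
# given how small the target set is).
candidates = [fit(w0, Xt, yt, lr=0.2, steps=50) for w0 in starts]
best = min(candidates, key=lambda w: logistic_loss(w, Xt, yt))
print("selected target loss:", logistic_loss(best, Xt, yt))
```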
Research on Dynamic Modeling and Control of Magnetorheological Hydro-Pneumatic Suspension
A novel magnetorheological semi-active hydro-pneumatic suspension system was proposed to overcome the shortcoming of traditional hydro-pneumatic suspensions, which lack an adaptive vibration-damping function. It is based on the magnetorheologi…
Regularized Dikin Walks for Sampling Truncated Logconcave Measures, Mixed Isoperimetry and Beyond Worst-Case Analysis
We study the problem of drawing samples from a logconcave distribution truncated on a polytope, motivated by computational challenges in Bayesian statistical models with indicator variables, such as probit regression. Building on interior …
Prominent Roles of Conditionally Invariant Components in Domain Adaptation: Theory and Algorithms
Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model. While many DA algorithms have demonstrated…
Hit-and-run mixing via localization schemes
We analyze the hit-and-run algorithm for sampling uniformly from an isotropic convex body $K$ in $n$ dimensions. We show that the algorithm mixes in time $\tilde{O}(n^2/\psi_n^2)$, where $\psi_n$ is the smallest isoperimetric constant for any i…
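For context, here is a minimal numpy sketch of the hit-and-run step analyzed in the paper, with the body given only by a membership oracle; the bisection tolerance and the bounding range for the chord search are illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_and_run(x, inside, n_steps, chord_bound=10.0, tol=1e-8):
    """Hit-and-run for the uniform distribution on a convex body given by a
    membership oracle `inside`; assumes the body fits in a ball of radius
    `chord_bound` around the current point."""
    def boundary(x, u, t_hi):
        # Largest t in [0, t_hi] with x + t*u inside, found by bisection.
        lo_t, hi_t = 0.0, t_hi
        while hi_t - lo_t > tol:
            mid = 0.5 * (lo_t + hi_t)
            if inside(x + mid * u):
                lo_t = mid
            else:
                hi_t = mid
        return lo_t

    samples = []
    for _ in range(n_steps):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)                 # uniform random direction
        t_plus = boundary(x, u, chord_bound)   # forward chord endpoint
        t_minus = boundary(x, -u, chord_bound) # backward chord endpoint
        x = x + rng.uniform(-t_minus, t_plus) * u  # uniform point on chord
        samples.append(x)
    return np.array(samples)

# Example: uniform sampling from the unit ball in n = 10 dimensions.
inside_ball = lambda y: np.linalg.norm(y) <= 1.0
chain = hit_and_run(np.zeros(10), inside_ball, n_steps=1000)
print(chain.mean(axis=0))  # should be near the origin
```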
Localization Schemes: A Framework for Proving Mixing Bounds for Markov Chains
Two recent and seemingly unrelated techniques for proving mixing bounds for Markov chains are: (i) the framework of Spectral Independence, introduced by Anari, Liu and Oveis Gharan, and its numerous extensions, which have given rise to sev…
Domain adaptation under structural causal models
Domain adaptation (DA) arises as an important problem in statistical machine learning when the source data used to train a model is different from the target data used to test the model. Recent advances in DA have mainly been application-d…
Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a log-smooth and strongly log-concave distribution. We establish its optimal minimax mixing time under a warm start. Our main contribution is t…
A Look at Robustness and Stability of $\ell_{1}$-versus $\ell_{0}$-Regularization: Discussion of Papers by Bertsimas et al. and Hastie et al.
We congratulate the authors Bertsimas, Pauphilet and van Parys (hereafter BPvP) and Hastie, Tibshirani and Tibshirani (hereafter HTT) for providing fresh and insightful views on the problem of variable selection and prediction in linear mo…
Sampling can be faster than optimization
Significance: Modern large-scale data analysis and machine learning applications rely critically on computationally efficient algorithms. There are 2 main classes of algorithms used in this setting—those based on optimization and those base…
Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients
Hamiltonian Monte Carlo (HMC) is a state-of-the-art Markov chain Monte Carlo sampling algorithm for drawing samples from smooth probability densities over continuous spaces. We study the variant most widely used in practice, Metropolized H…
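As a reference point for the variant studied, here is a minimal numpy sketch of Metropolized HMC for a target density $e^{-U(x)}$, where each proposal uses several leapfrog steps and hence multiple gradient evaluations; the step size and trajectory length below are arbitrary illustrative values, not the tuned settings from the analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolized_hmc(x, grad_U, U, step, n_leapfrog, n_iters):
    """Metropolized HMC for exp(-U(x)): each proposal runs n_leapfrog
    leapfrog steps (multi-step gradients), then is accepted or rejected
    with a Metropolis correction."""
    samples = []
    for _ in range(n_iters):
        p = rng.normal(size=x.shape)                 # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * step * grad_U(x_new)          # leapfrog half step
        for i in range(n_leapfrog):
            x_new += step * p_new
            if i < n_leapfrog - 1:
                p_new -= step * grad_U(x_new)
        p_new -= 0.5 * step * grad_U(x_new)          # final half step
        # Metropolis correction using the Hamiltonian H = U(x) + |p|^2 / 2.
        dH = (U(x_new) + 0.5 * p_new @ p_new) - (U(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            x = x_new
        samples.append(x)
    return np.array(samples)

# Example: standard Gaussian target, U(x) = |x|^2 / 2.
U = lambda x: 0.5 * x @ x
grad_U = lambda x: x
chain = metropolized_hmc(np.zeros(5), grad_U, U, step=0.3,
                         n_leapfrog=10, n_iters=2000)
print(chain.mean(axis=0), chain.var(axis=0))  # near 0 and 1
```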
The DeepTune framework for modeling and characterizing neurons in visual cortex area V4
Deep neural network models have recently been shown to be effective in predicting single-neuron responses in primate visual cortex area V4. Despite their high predictive accuracy, these models are generally difficult to interpret. This li…
Log-concave sampling: Metropolis-Hastings algorithms are fast
We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$, and prove a non-asymptotic upper bound on the mixing time of the Metropolis-adjusted Langevin algorithm (MALA). The method draws samples by simulati…
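A minimal numpy sketch of the MALA step analyzed in the paper, assuming access to $U$ and its gradient for a target $\propto e^{-U(x)}$; the step size $h$ below is an illustrative value, not the tuned choice from the theory.

```python
import numpy as np

rng = np.random.default_rng(3)

def mala(x, grad_U, U, h, n_iters):
    """Metropolis-adjusted Langevin algorithm for exp(-U(x)): a discretized
    Langevin proposal followed by a Metropolis-Hastings accept/reject."""
    def log_q(y, x):
        # Log density (up to a constant) of proposing y from x.
        diff = y - (x - h * grad_U(x))
        return -(diff @ diff) / (4.0 * h)

    samples = []
    for _ in range(n_iters):
        y = x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.normal(size=x.shape)
        log_alpha = (-U(y) + log_q(x, y)) - (-U(x) + log_q(y, x))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples.append(x)
    return np.array(samples)

# Example: strongly log-concave target, a standard Gaussian.
U = lambda x: 0.5 * x @ x
grad_U = lambda x: x
chain = mala(np.zeros(5), grad_U, U, h=0.1, n_iters=5000)
print(chain.mean(axis=0), chain.var(axis=0))  # near 0 and 1
```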
Fast MCMC sampling algorithms on polytopes
We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope. Both random walks are sampling algorithms derived from interior point methods…
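The Vaidya and John walks themselves weight the constraints via the volumetric barrier and approximate John ellipsoids, which is more involved than fits here; as a simpler stand-in showing the shared interior-point structure, below is a minimal sketch of the basic Dikin-style walk with a log-barrier Hessian (the radius parameter r is an illustrative choice, and this is not the paper's exact scheme).

```python
import numpy as np

rng = np.random.default_rng(4)

def dikin_walk(x, A, b, r, n_iters):
    """Dikin-style walk for uniform sampling from {x : Ax <= b}. Proposals
    are Gaussians shaped by the inverse log-barrier Hessian; the Vaidya
    and John walks replace this Hessian with volumetric-barrier and
    John-ellipsoid weightings of the constraints."""
    n = len(x)

    def hessian(x):
        s = b - A @ x                       # slacks, positive inside
        return A.T @ ((1.0 / s**2)[:, None] * A)

    def log_q(y, x, H):
        # Log proposal density of y given x, up to a shared constant.
        _, logdet = np.linalg.slogdet(H)
        d = y - x
        return 0.5 * logdet - (n / (2.0 * r**2)) * (d @ H @ d)

    samples = []
    for _ in range(n_iters):
        Hx = hessian(x)
        L = np.linalg.cholesky(np.linalg.inv(Hx))
        z = x + (r / np.sqrt(n)) * (L @ rng.normal(size=n))
        if np.all(A @ z < b):               # reject proposals leaving the polytope
            Hz = hessian(z)
            if np.log(rng.uniform()) < log_q(x, z, Hz) - log_q(z, x, Hx):
                x = z
        samples.append(x)
    return np.array(samples)

# Example: the cube [-1, 1]^3 written as a polytope.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.ones(6)
chain = dikin_walk(np.zeros(3), A, b, r=0.5, n_iters=2000)
print(chain.mean(axis=0))                   # near the centroid
```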
Self-calibrating neural networks for dimensionality reduction
Recently, a novel family of biologically plausible online algorithms for reducing the dimensionality of streaming data has been derived from the similarity matching principle. In these algorithms, the number of output dimensions can be …
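For background, here is a minimal sketch of a plain (non-self-calibrating) online similarity-matching network with Hebbian feedforward and anti-Hebbian lateral updates; the fixed output dimension k is exactly what the paper's self-calibrating variant relaxes, and the learning rates and update form here are a simplified illustrative choice rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def similarity_matching(X, k, eta=0.005, tau=0.5):
    """Simplified online similarity matching: a Hebbian feedforward matrix
    W and an anti-Hebbian lateral matrix M are updated from streaming
    inputs; outputs y = M^{-1} W x approximate a projection onto a
    k-dimensional principal subspace. Assumes M stays invertible, which
    holds in practice for small learning rates."""
    d = X.shape[1]
    W = rng.normal(size=(k, d)) / np.sqrt(d)     # feedforward weights
    M = np.eye(k)                                # lateral weights
    for x in X:
        y = np.linalg.solve(M, W @ x)            # network output
        W += eta * (np.outer(y, x) - W)          # Hebbian update
        M += (eta / tau) * (np.outer(y, y) - M)  # anti-Hebbian update
    return W, M

# Example: stream data whose variance concentrates in 2 directions.
d, k, T = 10, 2, 20000
scales = np.array([5.0, 3.0] + [0.1] * (d - 2))
X = rng.normal(size=(T, d)) * scales
W, M = similarity_matching(X, k)
F = np.linalg.solve(M, W)                        # effective projection
print(np.round(F, 2))  # rows should concentrate on the first 2 coordinates
```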
Design of Low Phase Noise and Low Stray Frequency Source Based on FPGA+HMC833
Microwave resonator sensors have been applied in various fields of measurement. A frequency source with low phase noise, low spurious output, low power consumption, and a small frequency step is key to microwave resonator sensor measurement. In order to adapt th…