Sulin Liu
Generative Marginalization Models
We introduce marginalization models (MAMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling by explicitly modeling all induced marginal distributions. Marginalizati…
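As a brief reminder of what the "induced marginal distributions" above refer to (the generic definition, in notation not taken from the paper): for a discrete distribution p(x) over x = (x_1, ..., x_D), the marginal of any subset x_S is obtained by summing out the remaining variables,

    p(x_S) = \sum_{x_{S^c}} p(x_S, x_{S^c}),

and, per the abstract, a marginalization model represents these quantities explicitly for arbitrary subsets S rather than recovering them by summation.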
Sparse Bayesian Optimization
Bayesian optimization (BO) is a powerful approach to sample-efficient optimization of black-box objective functions. However, the application of BO to areas such as recommendation systems often requires taking the interpretability and simp…
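As background for the abstract, here is a minimal sketch of the standard Bayesian optimization loop the paper builds on: a Gaussian-process surrogate plus an expected-improvement acquisition function. This is generic BO, not the paper's sparsity-aware method; bayes_opt, expected_improvement, and all parameter names are illustrative.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expected_improvement(gp, X_cand, y_best):
        # Closed-form expected improvement for minimization under a GP posterior.
        mu, sigma = gp.predict(X_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def bayes_opt(objective, bounds, n_init=5, n_iter=20, n_cand=1000, seed=0):
        # bounds: array of shape (d, 2) giving box constraints per input dimension.
        rng = np.random.default_rng(seed)
        d = bounds.shape[0]
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))  # initial design
        y = np.array([objective(x) for x in X])
        for _ in range(n_iter):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            # Score random candidates with the acquisition and query the best one.
            X_cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, d))
            x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y.min()))]
            X = np.vstack([X, x_next])
            y = np.append(y, objective(x_next))
        return X[np.argmin(y)], y.min()

    # Example: minimize a simple quadratic over [-2, 2]^2.
    x_best, y_best = bayes_opt(lambda x: float(np.sum((x - 0.5) ** 2)),
                               bounds=np.array([[-2.0, 2.0], [-2.0, 2.0]]))

The sparse-BO setting discussed in the abstract adds the further requirement that the returned solution be simple and interpretable, which this plain loop does not address.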
ProBF: Learning Probabilistic Safety Certificates with Barrier Functions
Safety-critical applications require controllers/policies that can guarantee safety with high confidence. The control barrier function is a useful tool to guarantee safety if we have access to the ground-truth system dynamics. In practice,…
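For context on the tool named above (standard control barrier function background rather than the paper's probabilistic extension): given control-affine dynamics \dot{x} = f(x) + g(x) u, a continuously differentiable h certifies forward invariance of the safe set \{x : h(x) \ge 0\} if the control input can always be chosen to satisfy

    \nabla h(x)^\top \big( f(x) + g(x)\, u \big) \ge -\alpha\big( h(x) \big)

for some extended class-K function \alpha. The abstract's point is that f and g are rarely known exactly in practice, so this condition has to be enforced under learned, uncertain dynamics.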
The Landscape of Matrix Factorization Revisited
We revisit the landscape of the simple matrix factorization problem. For low-rank matrix factorization, prior work has shown that there exist infinitely many critical points, all of which are either global minima or strict saddles. At a str…
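For a concrete reference point, the usual formulation of the problem analyzed here (standard notation, not necessarily the paper's exact assumptions) is

    \min_{U \in \mathbb{R}^{n \times k},\; V \in \mathbb{R}^{m \times k}} \; f(U, V) = \tfrac{1}{2} \| U V^\top - M \|_F^2,

and the cited landscape result says every critical point (\nabla f = 0) is either a global minimum or a strict saddle, i.e. a saddle at which the Hessian has a strictly negative eigenvalue.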
Data Poisoning Attacks on Multi-Task Relationship Learning
Multi-task learning (MTL) is a machine learning paradigm that improves the performance of each task by exploiting useful information contained in multiple related tasks. However, the relatedness of tasks can be exploited by attackers to la…
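Such attacks are commonly formalized as a bilevel optimization problem (a generic formulation for intuition, not necessarily this paper's exact objective): the attacker picks poisoned points D_p so that the model the learner subsequently trains is degraded,

    \max_{D_p} \; \mathcal{L}_{\text{attack}}\big( \theta^*(D_p) \big)
    \quad \text{s.t.} \quad
    \theta^*(D_p) \in \arg\min_{\theta} \; \mathcal{L}_{\text{train}}\big( \theta;\, D_{\text{clean}} \cup D_p \big),

where, in the multi-task setting, \theta covers both the per-task models and the learned task relationships, so poisoning one task can propagate to related tasks.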
Adaptive Group Sparse Multi-task Learning via Trace Lasso
In multi-task learning (MTL), tasks are learned jointly so that information among related tasks is shared and utilized to help improve generalization for each individual task. A major challenge in MTL is how to selectively choose what to s…
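For reference, the trace lasso named in the title (as introduced by Grave et al.; the multi-task variant in the paper may differ in details) regularizes a weight vector w through the nuclear norm of a column-rescaled design matrix,

    \Omega(w) = \| X \operatorname{Diag}(w) \|_*,

which reduces to the \ell_1 norm when the (normalized) columns of X are orthogonal and to the \ell_2 norm when they are all identical, so the induced sparsity pattern adapts to how correlated the features are.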
Distributed Multi-Task Relationship Learning
Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance for each task. Traditionally, to perform multi-task learning, one needs to centralize data from all the task…
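For context, multi-task relationship learning in the style of Zhang and Yeung jointly estimates the task weight matrix W = [w_1, ..., w_T] and a task covariance matrix \Omega; a common form of the objective (shown as background, not necessarily this paper's exact distributed formulation) is

    \min_{W, b, \Omega} \; \sum_{t=1}^{T} \sum_{i=1}^{n_t} \ell\big( y_{ti},\, w_t^\top x_{ti} + b_t \big)
    + \frac{\lambda_1}{2} \operatorname{tr}(W W^\top)
    + \frac{\lambda_2}{2} \operatorname{tr}(W \Omega^{-1} W^\top)
    \quad \text{s.t.} \quad \Omega \succeq 0, \; \operatorname{tr}(\Omega) = 1,

and a distributed setting keeps each task's data (x_{ti}, y_{ti}) on its own node, coordinating only through shared quantities such as \Omega instead of centralizing the raw data.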