Yiming Ying
How to Explore the Potential of Green Bonds?—Based on Propensity Score Matching Method
Accuracy of machine learning in diagnosing microsatellite instability in gastric cancer: A systematic review and meta-analysis
ML has demonstrated optimal performance in detecting MSI in GC and could serve as a prospective early adjunctive detection tool for MSI in GC. Future research should contemplate minimally invasive or non-invasive, readily collectible, and …
On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning
We study the discriminative probabilistic modeling on a continuous domain for the data prediction task of (multimodal) self-supervised representation learning. To address the challenge of computing the integral in the partition function fo…
Differentially private stochastic gradient descent with low-noise
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection. This paper addresses the practical and theoretical importance …
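The standard DP-SGD recipe underlying this line of work (clip each per-example gradient, average, then add calibrated Gaussian noise) can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the function name and hyperparameters are assumptions:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD step: clip per-example gradients, average, add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Rescale so each example's gradient has norm at most clip_norm
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale is tied to the clipping norm and batch size
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The clipping bound is what makes the noise scale sufficient for a differential-privacy guarantee; the privacy accounting itself is omitted here.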
Differentially Private Non-convex Learning for Multi-layer Neural Networks
This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node. In the first part, we examine cases with no hidden nodes, specifically focusin…
Outlier Robust Adversarial Training
Supervised learning models are challenged by the intrinsic complexities of training data such as outliers and minority subpopulations and intentional attacks at inference time with adversarial samples. While traditional robust learning met…
Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms
Many machine learning tasks can be formulated as a stochastic compositional optimization (SCO) problem such as reinforcement learning, AUC maximization, and meta-learning, where the objective function involves a nested composition associat…
Minimax AUC Fairness: Efficient Algorithm with Provable Convergence
The use of machine learning models in consequential decision making often exacerbates societal inequity, in particular yielding disparate impact on members of marginalized groups defined by race and gender. The area under the ROC curve (AU…
Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance
Multi-objective learning (MOL) problems often arise in emerging machine learning problems when there are multiple learning criteria, data modalities, or learning tasks. Different from single-objective learning, one of the critical challeng…
Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks
Recently, significant progress has been made in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-…
Fairness-aware Differentially Private Collaborative Filtering
Recently, there has been an increasing adoption of differential privacy guided algorithms for privacy-preserving machine learning tasks. However, the use of such algorithms comes with trade-offs in terms of algorithmic fairness, which h…
Unmixing biological fluorescence image data with sparse and low-rank Poisson regression
Motivation: Multispectral biological fluorescence microscopy has enabled the identification of multiple targets in complex samples. The accuracy in the unmixing result degrades (i) as the number of fluorophores used in any experiment increa…
Generalization Analysis for Contrastive Representation Learning
Recently, contrastive learning has found impressive success in advancing the state of the art in solving various machine learning tasks. However, the existing generalization analysis is very limited or even not meaningful. In particular, t…
Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks still remains largely elusive. In this paper, we study the generalization behavior of shallow neural networ…
Stability and Generalization for Markov Chain Stochastic Gradient Methods
Recently there is a large amount of work devoted to the study of Markov chain stochastic gradient methods (MC-SGMs) which mainly focus on their convergence analysis for solving minimization problems. In this paper, we provide a comprehensi…
AUC Maximization in the Era of Big Data and AI: A Survey
Area under the ROC curve, a.k.a. AUC, is a measure of choice for assessing the performance of a classifier for imbalanced data. AUC maximization refers to a learning paradigm that learns a predictive model by directly maximizing its AUC sc…
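As context for the survey's subject: the empirical AUC is the fraction of positive/negative score pairs ranked correctly (ties counted as half), and AUC maximization typically replaces the non-differentiable indicator with a convex pairwise surrogate. A minimal sketch with illustrative names:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly."""
    total = correct = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            total += 1
            if sp > sn:
                correct += 1
            elif sp == sn:
                correct += 0.5  # ties count as half a correct ranking
    return correct / total

def pairwise_hinge_loss(scores_pos, scores_neg, margin=1.0):
    """A convex surrogate commonly used in AUC maximization."""
    return sum(max(0.0, margin - (sp - sn))
               for sp in scores_pos for sn in scores_neg) \
        / (len(scores_pos) * len(scores_neg))
```

Because both quantities range over all positive/negative pairs, naive evaluation is quadratic; much of the AUC-maximization literature is about avoiding exactly this cost in stochastic settings.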
Differentially Private SGDA for Minimax Problems
Stochastic gradient descent ascent (SGDA) and its variants have been the workhorse for solving minimax problems. However, in contrast to the well-studied stochastic gradient descent (SGD) with differential privacy (DP) constraints, there i…
Message from the IUCC 2021 General Chairs IUCC/CIT/DSCI/SmartCNS 2021
Welcome to the …
Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances. It instantiates many important machine learning tasks such as bipartite ranking and metric learning. A popular approach to handle streaming …
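The streaming setup described here can be illustrated with a toy online learner that pairs each arriving instance with previously seen instances of the opposite label and takes a gradient step on a pairwise hinge loss. This is an assumption-laden sketch for intuition, not the paper's algorithm:

```python
import numpy as np

def online_pairwise_sgd(stream, lr=0.1, dim=2):
    """Online gradient descent for a pairwise hinge loss (bipartite ranking toy).

    stream: iterable of (x, y) with y in {+1, -1}. Each new instance is paired
    with all previously seen instances of the opposite label.
    """
    w = np.zeros(dim)
    seen = []  # buffer of past (x, y) pairs
    for x, y in stream:
        for xp, yp in seen:
            if yp == y:
                continue  # the loss is defined on opposite-label pairs only
            pos, neg = (x, xp) if y > yp else (xp, x)
            # Hinge on the score gap: want w @ pos - w @ neg >= 1
            if 1.0 - w @ (pos - neg) > 0:
                w += lr * (pos - neg)
        seen.append((x, y))
    return w
```

Pairing every new point with the full history is what makes naive streaming pairwise learning expensive; the simplified algorithms studied in this line of work reduce that cost.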
Stability and Generalization of Stochastic Gradient Methods for Minimax Problems
Many machine learning problems can be formulated as minimax problems such as Generative Adversarial Networks (GANs), AUC maximization and robust estimation, to mention but a few. A substantial amount of studies are devoted to studying the …
Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning
In recent years, model-agnostic meta-learning (MAML) has become a popular research area. However, the stochastic optimization of MAML is still underdeveloped. Existing MAML algorithms rely on the "episode" idea by sampling a few tasks an…
Sum of Ranked Range Loss for Supervised Learning
In forming learning objectives, one oftentimes needs to aggregate a set of individual values to a single output. Such cases occur in the aggregate loss, which combines individual losses of a learning model over each training sample, and in…
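The aggregation in question sums only a ranked slice of the individual values, which interpolates between familiar aggregates. A minimal sketch (1-indexed ranks assumed; the paper's exact indexing convention may differ):

```python
def sum_of_ranked_range(values, k, m):
    """Sum of the k-th through m-th largest values (1-indexed ranks).

    k = 1, m = len(values) recovers the plain sum (so the average loss,
    up to scaling); k = m = 1 recovers the maximum individual loss.
    """
    ranked = sorted(values, reverse=True)
    return sum(ranked[k - 1:m])
```

Dropping the top k - 1 values makes the aggregate robust to outlier losses, while ignoring the tail below rank m keeps the focus on hard examples.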
Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss
Pairwise learning has recently received increasing attention since it subsumes many important machine learning tasks (e.g. AUC maximization and metric learning) into a unifying framework. In this paper, we give the first-ever-known stabili…
Patterns of mega-forest fires in east Siberia will become less predictable with climate warming
Very large fires covering tens to hundreds of hectares, termed mega-fires, have become a prominent feature of fire regime in taiga forests worldwide, and in Siberia in particular. Here, we applied an array of machine learning algorithms an…