Xichen Ye
Towards Robust Influence Functions with Flat Validation Minima
The Influence Function (IF) is a widely used technique for assessing the impact of individual training samples on model predictions. However, existing IF methods often fail to provide reliable influence estimates in deep neural networks, p…
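For background, the classic influence-function estimate (Koh & Liang, 2017) approximates the effect of a training point on a validation loss as a gradient/inverse-Hessian product. A minimal sketch on a toy linear model, with all data and shapes hypothetical:

```python
import numpy as np

# Toy setup: linear regression with squared loss. The influence of
# training point z_i on the validation loss is approximated as
#   I(z_i) = -grad_val^T H^{-1} grad_i
# where H is the Hessian of the mean training loss at the fit.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

# Closed-form least-squares fit
w = np.linalg.solve(X.T @ X, X.T @ y)

# Per-sample gradient of 0.5*(x^T w - y)^2 w.r.t. w is (x^T w - y) x
resid = X @ w - y
grads = resid[:, None] * X                 # shape (50, 3)

H = X.T @ X / len(X)                       # Hessian of the mean loss
x_val, y_val = rng.normal(size=3), 0.0
grad_val = (x_val @ w - y_val) * x_val     # validation-loss gradient

influence = -grads @ np.linalg.solve(H, grad_val)  # one score per train point
print(influence.shape)                     # (50,)
```

A negative score means removing that training point is estimated to increase the validation loss; the paper's concern is that such estimates become unreliable in deep networks.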
Optimized Gradient Clipping for Noisy Label Learning
Previous research has shown that constraining the gradient of the loss function with respect to model-predicted probabilities can enhance model robustness against noisy labels. These methods typically specify a fixed optimal threshold for gradient …
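The fixed-threshold baseline described above can be sketched as follows. For cross entropy, the gradient with respect to the true-class probability p is -1/p, which explodes as p → 0 on likely-mislabeled samples; capping its magnitude bounds each sample's contribution. The threshold `tau` here is a hypothetical fixed value, which is exactly the assumption the paper revisits:

```python
import numpy as np

# Illustrative sketch, not the paper's adaptive method: clip the
# gradient of cross entropy w.r.t. the predicted probability p of the
# labeled class. dCE/dp = -1/p, so small p (possible label noise)
# yields a huge gradient; clamping at a fixed tau bounds it.

def clipped_ce_grad(p, tau=4.0):
    g = -1.0 / np.clip(p, 1e-12, 1.0)      # raw gradient w.r.t. p
    return np.clip(g, -tau, tau)           # magnitude capped at tau

p = np.array([0.9, 0.5, 0.01])             # confident, uncertain, likely noisy
print(clipped_ce_grad(p))                  # third entry is clamped to -tau
```

The likely-noisy sample (p = 0.01) would contribute a gradient of -100 unclipped; clipping caps it at -4.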
Revisiting Energy-Based Model for Out-of-Distribution Detection
Out-of-distribution (OOD) detection is an essential approach to robustifying deep learning models, enabling them to identify inputs that fall outside their training distribution. Existing OOD detection methods usually depend on crafted d…
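As background, the standard energy score (Liu et al., 2020) that this line of work builds on is E(x) = -logsumexp(f(x)), computed directly from a classifier's logits; in-distribution inputs tend to receive lower energy. A minimal sketch with hypothetical logits:

```python
import numpy as np

# Standard energy score for OOD detection: E(x) = -logsumexp(logits).
# Lower energy suggests the input is in-distribution.

def energy_score(logits):
    m = logits.max(axis=-1, keepdims=True)          # numerically stable LSE
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

id_logits  = np.array([[8.0, 0.5, 0.2]])    # confident, peaked prediction
ood_logits = np.array([[0.3, 0.2, 0.1]])    # flat logits, likely OOD

print(energy_score(id_logits) < energy_score(ood_logits))  # [ True]
```

Thresholding this score gives a detector with no extra training, which is why the energy-based view is a common starting point for OOD methods.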
Active Negative Loss: A Robust Framework for Learning with Noisy Labels
Deep supervised learning has achieved remarkable success across a wide range of tasks, yet it remains susceptible to overfitting when confronted with noisy labels. To address this issue, noise-robust loss functions offer an effective solut…
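To illustrate what a noise-robust loss looks like (this is prior work the field builds on, not the paper's Active Negative Loss framework itself): Normalized Cross Entropy (Ma et al., 2020) divides the labeled-class cross entropy by its sum over all classes, which bounds the loss and limits how much any single mislabeled sample can dominate training:

```python
import numpy as np

# Illustrative noise-robust loss: Normalized Cross Entropy.
# NCE = -log p_y / (-sum_k log p_k), which lies in [0, 1], unlike
# plain cross entropy, which is unbounded as p_y -> 0.

def normalized_ce(probs, label):
    logp = np.log(np.clip(probs, 1e-12, 1.0))
    return logp[label] / logp.sum()

probs = np.array([0.7, 0.2, 0.1])
print(normalized_ce(probs, 0))   # small value: confident, correct prediction
```

Boundedness is the key property: a noisy label can add at most 1 to the loss, whereas standard cross entropy would let it grow without limit.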
DuPL: Dual Student with Trustworthy Progressive Learning for Robust Weakly Supervised Semantic Segmentation
Recently, one-stage Weakly Supervised Semantic Segmentation (WSSS) with image-level labels has gained increasing interest due to its simplification over the cumbersome multi-stage counterpart. Limited by the inherent ambiguity of Class Activat…
GradPU: Positive-Unlabeled Learning via Gradient Penalty and Positive Upweighting
Positive-unlabeled learning is an essential problem in many real-world applications where only labeled positive and unlabeled data are available, especially when negative samples are difficult to identify. Most existing positive-unlabeled learning me…
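For context, a standard baseline in this setting is the non-negative PU risk estimator (Kiryo et al., 2017), sketched below; this is background, not GradPU itself. Given a class prior pi, the negative-class risk is estimated from unlabeled data and clamped at zero to prevent overfitting. The scores and prior here are hypothetical:

```python
import numpy as np

# Non-negative PU risk estimator: with class prior pi, labeled-positive
# scores s_p, and unlabeled scores s_u,
#   R = pi * R_p(+1) + max(0, R_u(-1) - pi * R_p(-1))
# where R_*(y) is the mean surrogate loss treating samples as class y.

def sigmoid_loss(scores, label):           # l(s, y) = 1 / (1 + exp(y * s))
    return 1.0 / (1.0 + np.exp(label * scores))

def nn_pu_risk(s_p, s_u, pi=0.3):
    r_p_pos = sigmoid_loss(s_p, +1).mean()   # positives treated as positive
    r_p_neg = sigmoid_loss(s_p, -1).mean()   # positives treated as negative
    r_u_neg = sigmoid_loss(s_u, -1).mean()   # unlabeled treated as negative
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

s_p = np.array([2.0, 1.5, 3.0])            # scores on labeled positives
s_u = np.array([-1.0, 0.2, -2.5, 1.8])     # scores on unlabeled data
print(nn_pu_risk(s_p, s_u) >= 0.0)         # True
```

The max(0, ·) clamp is what makes the estimator "non-negative": without it, flexible models can drive the estimated negative risk below zero and overfit.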