Kun-Peng Ning
Experimental investigation on the effect of solid particle erosion on the water droplet erosion damages for blade materials
Sparse Orthogonal Parameters Tuning for Continual Learning
Continual learning methods based on pre-trained models (PTMs), which adapt to successive downstream tasks without catastrophic forgetting, have recently gained attention. These methods typically refrain from updating the pre-trained parameter…
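As a rough, hypothetical illustration of what keeping per-task parameter updates sparse and mutually orthogonal can look like (a generic sketch under assumed names, not this paper's algorithm), one can penalize overlap between the current task's update vector and the updates stored for earlier tasks:

import torch

def sparse_orthogonal_penalty(new_delta, past_deltas, l1_weight=1e-4):
    # Generic regularizer sketch: encourage the current task's parameter
    # update to be sparse (L1 term) and orthogonal to the updates learned
    # for previous tasks (squared inner products). Illustrative only.
    ortho = sum(torch.dot(new_delta, d) ** 2 for d in past_deltas)
    sparsity = l1_weight * new_delta.abs().sum()
    return ortho + sparsity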
Bidirectional Uncertainty-Based Active Learning for Open Set Annotation
Active learning (AL) in open set scenarios presents a novel challenge of identifying the most valuable examples in an unlabeled data pool that comprises data from both known and unknown classes. Traditional methods prioritize selecting inf…
LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples
Large Language Models (LLMs), including GPT-3.5, LLaMA, and PaLM, seem to be knowledgeable and able to adapt to many tasks. However, we still cannot completely trust their answers, since LLMs suffer from hallucination: f…
Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search
Search query classification, as an effective way to understand user intents, is of great importance in real-world online ads systems. To ensure low latency, a shallow model (e.g., FastText) is widely used for efficient online inference…
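For intuition only, here is a generic multi-teacher distillation sketch (not necessarily the condensation strategy used in this work; all function names are hypothetical): the softened predictions of several expert teachers are averaged into one target distribution that a shallow student is trained to match.

import torch
import torch.nn.functional as F

def condense_experts(teacher_logits_list, temperature=2.0):
    # Average the softened class distributions of several expert teachers
    # into a single soft target (generic ensemble distillation).
    probs = [F.softmax(logits / temperature, dim=-1) for logits in teacher_logits_list]
    return torch.stack(probs).mean(dim=0)

def student_distillation_loss(student_logits, soft_targets, temperature=2.0):
    # KL divergence between the student's softened prediction and the
    # condensed teacher distribution, scaled by T^2 as in standard distillation.
    log_p = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_p, soft_targets, reduction="batchmean") * temperature ** 2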
Active Learning for Open-set Annotation
Existing active learning studies typically work in the closed-set setting by assuming that all data examples to be labeled are drawn from known classes. However, in real annotation tasks, the unlabeled data usually contains a large amount …
Asynchronous Active Learning with Distributed Label Querying
Active learning tries to learn an effective model with the lowest labeling cost. Most existing active learning methods work in a synchronous way, which implies that label querying can be performed only after the model update in each iter…
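To make the contrast with synchronous querying concrete, the following toy sketch (illustrative only; select, query_label, and update are hypothetical callables, and this is not the paper's algorithm) dispatches label queries to several annotators in parallel and folds each label into the model as soon as it arrives, instead of blocking on a full query round:

from concurrent.futures import ThreadPoolExecutor, as_completed

def asynchronous_active_learning(model, pool, select, query_label, update, n_queries):
    # Pick informative candidates, send the label queries out in parallel,
    # and update the model with each answer as soon as it comes back.
    candidates = select(model, pool, n_queries)
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = {executor.submit(query_label, x): x for x in candidates}
        for done in as_completed(futures):       # labels arrive out of order
            x, y = futures[done], done.result()
            model = update(model, x, y)          # no waiting for the whole batch
    return model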
Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries
In addition to high accuracy, robustness is becoming increasingly important for machine learning models in various applications. Recently, much research has been devoted to improving model robustness by training with noise perturbation…
Co-Imitation Learning without Expert Demonstration
Imitation learning is a primary approach to improving the efficiency of reinforcement learning by exploiting expert demonstrations. However, in many real scenarios, obtaining expert demonstrations could be extremely expensive or even imp…
Reinforcement Learning with Supervision from Noisy Demonstrations
Reinforcement learning has achieved great success in various applications. Learning an effective policy for the agent usually requires a huge amount of data from interacting with the environment, which can be computationally costly and t…
Tackle Balancing Constraint for Incremental Semi-Supervised Support Vector Learning
Semi-Supervised Support Vector Machine (S3VM) is one of the most popular methods for semi-supervised learning. To avoid the trivial solution of classifying all the unlabeled examples into the same class, a balancing constraint is often used with…
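For reference, one standard form of the balancing constraint from the S3VM literature (the exact constraint handled in this paper may differ) ties the average prediction over the u unlabeled examples to the label proportion of the l labeled ones:

\frac{1}{u}\sum_{j=l+1}^{l+u} f(x_j) \;=\; \frac{1}{l}\sum_{i=1}^{l} y_i

This forces the classifier to spread the unlabeled examples across both classes rather than assigning all of them to one side.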