Wonbin Kweon
BPL: Bias-Adaptive Preference Distillation Learning For Recommender System
Recommender systems suffer from biases that cause the collected feedback to incompletely reveal user preferences. While debiasing learning has been extensively studied, existing methods mostly focus on the specialized (called counterfactual) test env…
Topic Coverage-based Demonstration Retrieval for In-Context Learning
The effectiveness of in-context learning relies heavily on selecting demonstrations that provide all the necessary information for a given test input. To achieve this, it is crucial to identify and cover fine-grained knowledge requirements…
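Covering fine-grained knowledge requirements is an instance of the classic maximum-coverage problem, for which greedy selection is the standard approach. The sketch below is a generic greedy max-coverage selector, not the paper's actual retrieval algorithm; the per-candidate topic sets and the notion of "required topics" are assumptions for illustration.

```python
# Hedged sketch: greedy max-coverage demonstration selection.
# The topic sets per candidate are hypothetical inputs; the paper's
# notion of fine-grained knowledge requirements may differ.
def select_demonstrations(candidates, required_topics, k):
    """Greedily pick up to k candidates covering the most required topics.

    candidates: dict mapping demo id -> set of topics it covers
    required_topics: set of topics needed by the test input
    Returns (chosen ids in selection order, topics still uncovered).
    """
    uncovered = set(required_topics)
    chosen = []
    pool = dict(candidates)
    while pool and uncovered and len(chosen) < k:
        # Pick the candidate covering the most still-uncovered topics.
        best = max(pool, key=lambda c: len(pool[c] & uncovered))
        if not pool[best] & uncovered:
            break  # no remaining candidate adds coverage
        chosen.append(best)
        uncovered -= pool.pop(best)
    return chosen, uncovered
```

For example, with candidates `{"d1": {"a","b"}, "d2": {"b","c"}, "d3": {"c"}}` and required topics `{"a","b","c"}`, two picks suffice to cover everything.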
Q-Align: Alleviating Attention Leakage in Zero-Shot Appearance Transfer via Query-Query Alignment
We observe that zero-shot appearance transfer with large-scale image generation models faces a significant challenge: Attention Leakage. This challenge arises when the semantic mapping between two images is captured by the Query-Key alignm…
Uncertainty Quantification and Decomposition for LLM-based Recommendation
Despite the widespread adoption of large language models (LLMs) for recommendation, we demonstrate that LLMs often exhibit uncertainty in their recommendations. To ensure the trustworthy use of LLMs in generating recommendations, we emphas…
Improving Scientific Document Retrieval with Concept Coverage-based Query Set Generation
In specialized fields like the scientific domain, constructing large-scale human-annotated datasets poses a significant challenge due to the need for domain expertise. Recent methods have employed large language models to generate syntheti…
Controlling Diversity at Inference: Guiding Diffusion Recommender Models with Targeted Category Preferences
Diversity control is an important task to alleviate bias amplification and filter bubble problems. The desired degree of diversity may fluctuate based on users' daily moods or business strategies. However, existing methods for controlling …
Continual Collaborative Distillation for Recommender System
Knowledge distillation (KD) has emerged as a promising technique for addressing the computational challenges associated with deploying large-scale recommender systems. KD transfers the knowledge of a massive teacher system to a compact stu…
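The teacher-to-student transfer mentioned above is commonly realized as Hinton-style logit matching. The sketch below shows that generic formulation only; it is not the continual-distillation method proposed in the paper, and the temperature value is an illustrative assumption.

```python
import math

# Hedged sketch of generic knowledge distillation (logit matching),
# not the specific continual-distillation method of the paper.
def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions;
    minimizing it pushes the student's predictions toward the teacher's."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

A higher temperature flattens the teacher's distribution, exposing the relative scores of non-top items ("dark knowledge") that the student would not see from hard labels alone.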
Top-Personalized-K Recommendation
Recommender systems often suffer from selection bias as users tend to rate their preferred items. The datasets collected under such conditions exhibit entries missing not at random and thus are not randomized-controlled trials represent…
Rectifying Demonstration Shortcut in In-Context Learning
Large language models (LLMs) are able to solve various tasks with only a few demonstrations utilizing their in-context learning (ICL) abilities. However, LLMs often rely on their pre-trained semantic priors of demonstrations rather than on…
Confidence Calibration for Recommender Systems and Its Applications
Despite the importance of having a measure of confidence in recommendation results, it has been surprisingly overlooked in the literature compared to the accuracy of the recommendation. In this dissertation, I propose a model calibration f…
Deep Rating Elicitation for New Users in Collaborative Filtering
Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset for inferring their preferences, to improve the quality of initial recommendations. The key challenge of the rating elicitation…
Unbiased, Effective, and Efficient Distillation from Heterogeneous Models for Recommender Systems
In recent years, recommender systems have achieved remarkable performance by using ensembles of heterogeneous models. However, this approach is costly due to the resources and inference latency proportional to the number of models, creatin…
Distillation from Heterogeneous Models for Top-K Recommendation
Recent recommender systems have shown remarkable performance by using an ensemble of heterogeneous models. However, this approach is exceedingly costly because it requires resources and inference latency proportional to the number of models, which re…
Obtaining Calibrated Probabilities with Personalized Ranking Models
For personalized ranking models, the well-calibrated probability of an item being preferred by a user has great practical value. While existing work shows promising results in image classification, probability calibration has not been much…
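A standard baseline for mapping raw model scores to calibrated probabilities is Platt scaling. The sketch below fits that generic sigmoid mapping on held-out (score, label) pairs by gradient descent; it is only an illustration of post-hoc calibration, not the method the paper proposes for personalized ranking, and the learning rate and step count are assumptions.

```python
import math

# Hedged sketch: Platt scaling, a generic post-hoc calibration baseline.
def fit_platt(scores, labels, lr=0.1, steps=2000):
    """Fit p(y=1|s) = sigmoid(a*s + b) by gradient descent on log loss."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(steps):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n  # gradient of mean log loss w.r.t. a
            gb += (p - y) / n      # gradient of mean log loss w.r.t. b
        a -= lr * ga
        b -= lr * gb
    return a, b

def calibrated_prob(score, a, b):
    """Map a raw ranking score to a calibrated preference probability."""
    return 1.0 / (1.0 + math.exp(-(a * score + b)))
```

Because the mapping is monotone in the score, it changes predicted probabilities without changing the ranking itself, which is what makes post-hoc calibration attractive for ranking models.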
Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering
Over the past decades, many learning objectives for One-Class Collaborative Filtering (OCCF) have been researched, based on a variety of underlying probabilistic models. From our analysis, we observe that models trained with different OCCF…
Topology Distillation for Recommender System
Recommender Systems (RS) have employed knowledge distillation, a model compression technique that trains a compact student model with knowledge transferred from a pre-trained large teacher model. Recent work has shown that transfe…