Guibing Guo
Research on High-Density Discrete Seismic Signal Denoising Processing Method Based on the SFOA-VMD Algorithm
With the increasing demand for precision in seismic exploration, high-resolution surveys and shallow-layer identification have become essential. This requires higher sampling frequencies during seismic data acquisition, which shortens seis…
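As an illustration of the VMD half of the pipeline, here is a minimal mode-decomposition denoising sketch. It assumes the third-party vmdpy package; the hyperparameters (alpha, K) and the frequency cutoff are illustrative stand-ins for the values SFOA would normally search for, not the paper's settings.

```python
# Minimal VMD-based denoising sketch (not the paper's SFOA-VMD pipeline):
# decompose a noisy trace into K modes, keep the low-frequency modes,
# and discard the ones dominated by noise.
import numpy as np
from vmdpy import VMD  # third-party package, assumed available

fs = 4000                                   # illustrative sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 30 * t)          # synthetic "seismic" component
noisy = clean + 0.4 * np.random.randn(t.size)

alpha, tau, K, DC, init, tol = 2000, 0.0, 5, 0, 1, 1e-7
u, u_hat, omega = VMD(noisy, alpha, tau, K, DC, init, tol)  # u: (K, T) modes

# Keep modes whose final center frequencies fall below a crude cutoff
# (omega is assumed normalized by fs); a real pipeline would select
# modes by correlation or energy criteria instead.
keep = omega[-1] * fs < 100
denoised = u[keep].sum(axis=0)
```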
EPT: Efficient Prompt Tuning by Multi-Space Projection and Prompt Fusion
Prompt tuning is a promising method to fine-tune a pre-trained language model without retraining its large-scale parameters. Instead, it attaches a soft prompt to the input text, whereby downstream tasks can be well adapted by merely learn…
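For context, a minimal PyTorch sketch of vanilla soft prompt tuning, the mechanism this abstract builds on (not EPT's multi-space projection itself). The HuggingFace-style backbone interface and the prompt length are assumptions.

```python
# Vanilla soft prompt tuning: only the prompt embeddings train,
# the pre-trained model stays frozen.
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    def __init__(self, backbone, prompt_len=20):
        super().__init__()
        self.backbone = backbone                    # frozen pre-trained LM
        for p in self.backbone.parameters():
            p.requires_grad = False
        d = backbone.config.hidden_size
        self.prompt = nn.Parameter(torch.randn(prompt_len, d) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok = self.backbone.get_input_embeddings()(input_ids)
        B = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(B, -1, -1)
        embeds = torch.cat([prompt, tok], dim=1)    # prepend the soft prompt
        pad = torch.ones(B, self.prompt.size(0),
                         dtype=attention_mask.dtype,
                         device=attention_mask.device)
        mask = torch.cat([pad, attention_mask], dim=1)
        return self.backbone(inputs_embeds=embeds, attention_mask=mask)
```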
Augmenting Sequential Recommendation with Balanced Relevance and Diversity
By generating new yet effective data, data augmentation has become a promising method to mitigate the data sparsity problem in sequential recommendation. Existing works focus on augmenting the original data but rarely explore the issue of …
CoRA: Collaborative Information Perception by Large Language Model’s Weights for Recommendation
Involving collaborative information in Large Language Models (LLMs) is a promising technique for adapting LLMs for recommendation. Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequen…
Data Augmentation as Free Lunch: Exploring the Test-Time Augmentation for Sequential Recommendation
Data augmentation has become a promising method of mitigating data sparsity in sequential recommendation. Existing methods generate new yet effective data during model training to improve performance. However, deploying them requires retra…
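A minimal sketch of the general test-time-augmentation idea the abstract describes: average scores over augmented views of a user's sequence at inference, leaving the trained model untouched. Here `score_fn` and the crop augmentation are hypothetical stand-ins, not the paper's method.

```python
import random

def crop(seq, ratio=0.8):
    """Randomly crop a contiguous subsequence (a common SR augmentation)."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def tta_scores(score_fn, seq, n_views=4):
    """Average item scores over the original sequence and augmented views."""
    views = [seq] + [crop(seq) for _ in range(n_views - 1)]
    per_view = [score_fn(v) for v in views]   # each: one score per candidate item
    return [sum(s) / len(per_view) for s in zip(*per_view)]
```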
Structured Outputs Enable General-Purpose LLMs to be Medical Experts
Medical question-answering (QA) is a critical task for evaluating how effectively large language models (LLMs) encode clinical knowledge and assessing their potential applications in medicine. Despite showing promise on multiple-choice tes…
Efficient and Effective Prompt Tuning via Prompt Decomposition and Compressed Outer Product
Prompt tuning (PT) offers a cost-effective alternative to fine-tuning large-scale pre-trained language models (PLMs), requiring only a few parameters in soft prompt tokens added before the input text. However, existing PT approaches face t…
Distributionally Robust Graph Out-of-Distribution Recommendation via Diffusion Model
Distributionally robust optimization (DRO)-based graph neural network methods improve recommendation systems' out-of-distribution (OOD) generalization by optimizing the model's worst-case performance. However, these studies fail to con…
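For reference, a minimal group-DRO-style worst-case objective, the kind of optimization the abstract refers to; this is a generic illustration, not the paper's diffusion-based approach.

```python
# Optimize the worst-case group loss instead of the average loss.
import torch

def worst_case_loss(per_sample_loss, group_ids, num_groups):
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses.append(per_sample_loss[mask].mean())
    return torch.stack(group_losses).max()    # gradient flows through the max
```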
Efficient and Adaptive Recommendation Unlearning: A Guided Filtering Framework to Erase Outdated Preferences
Recommendation unlearning is an emerging task to erase the influences of user-specified data from a trained recommendation model. Most existing research follows the paradigm of partitioning the original dataset into multiple folds and then ret…
Self-supervised Hierarchical Representation for Medication Recommendation
Medication recommendation aims to suggest appropriate medication combinations based on a patient's health history, e.g., diagnoses and procedures. Existing works represent different diagnoses/procedures as well-separated one-hot encodings. Howe…
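A tiny illustration of the one-hot limitation the abstract points at: hierarchically related codes end up exactly as dissimilar as unrelated ones (ICD-10-style codes used purely for illustration).

```python
import numpy as np

codes = ["E10", "E11", "S72"]   # type-1 diabetes, type-2 diabetes, femur fracture
onehot = np.eye(len(codes))

sim = onehot @ onehot.T         # cosine similarity of unit one-hot vectors
# sim is the identity matrix: similarity(E10, E11) == similarity(E10, S72) == 0,
# i.e., the two diabetes codes look no more related than diabetes and a fracture,
# which is what hierarchical representations aim to fix.
```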
Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging
Multi-task learning (MTL) leverages a shared model to accomplish multiple tasks and facilitate knowledge transfer. Recent research on task arithmetic-based MTL demonstrates that merging the parameters of independently fine-tuned models can…
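For context, a minimal task-arithmetic merging sketch over state dicts, the baseline the abstract builds on: a task vector is the delta between a fine-tuned model and its base, and merging adds scaled task vectors back to the base. This is plain parameter arithmetic, not the paper's weight-ensembling mixture of experts; the scale value is illustrative.

```python
import torch

def merge_task_arithmetic(base_sd, finetuned_sds, scale=0.3):
    """Merge fine-tuned state dicts into the base via scaled task vectors."""
    merged = {k: v.clone() for k, v in base_sd.items()}
    for sd in finetuned_sds:
        for k, v in merged.items():
            if v.is_floating_point():               # skip integer buffers
                v += scale * (sd[k] - base_sd[k])   # add the scaled task vector
    return merged
```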
SurgeryV2: Bridging the Gap Between Model Merging and Multi-Task Learning with Deep Representation Surgery
Model merging-based multi-task learning (MTL) offers a promising approach for performing MTL by merging multiple expert models without requiring access to raw training data. However, in this paper, we examine the merged model's representati…
Data Augmentation for Sequential Recommendation: A Survey
As an essential branch of recommender systems, sequential recommendation (SR) has received much attention because it aligns well with real-world situations. However, the widespread data sparsity issue limits the SR model's performance…
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities
Model merging is an efficient model-enhancement technique in the machine learning community that requires neither collecting raw training data nor expensive computation. As model merging becomes increasingly prevalent acr…
Symmetric Graph Contrastive Learning against Noisy Views for Recommendation
Graph Contrastive Learning (GCL) leverages data augmentation techniques to produce contrasting views, enhancing the accuracy of recommendation systems through learning the consistency between contrastive views. However, existing augmentati…
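For reference, a minimal InfoNCE-style consistency loss between two augmented views, the standard GCL objective the abstract refers to (not the paper's symmetric, noise-robust variant).

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE between two views; row i of z1 and z2 embed the same node."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature            # (N, N) view-to-view similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)      # diagonal pairs are positives
```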
Graph Representation Learning via Causal Diffusion for Out-of-Distribution Recommendation
Graph Neural Networks (GNNs)-based recommendation algorithms typically assume that training and testing data are drawn from independent and identically distributed (IID) spaces. However, this assumption often fails in the presence of out-o…
Deconfounding User Preference in Recommendation Systems through Implicit and Explicit Feedback
Recommender systems are influenced by many confounding factors (i.e., confounders) which result in various biases (e.g., popularity bias) and inaccurate user preference estimation. Existing approaches try to eliminate these biases by inference with…