Wenpeng Lü
ALSA: Context-Sensitive Prompt Privacy Preservation in Large Language Models
Divide-Then-Rule: A Cluster-Driven Hierarchical Interpolator for Attribute-Missing Graphs
Deep graph clustering (DGC) for attribute-missing graphs is an unsupervised task aimed at partitioning nodes with incomplete attributes into distinct clusters. Addressing this challenging issue is vital for practical applications. However,…
CCHall: A Novel Benchmark for Joint Cross-Lingual and Cross-Modal Hallucinations Detection in Large Language Models
Investigating hallucination issues in large language models (LLMs) within cross-lingual and cross-modal scenarios can greatly advance their large-scale deployment in real-world applications. Nevertheless, the current studies are limited to a…
Time-aware Medication Recommendation via Intervention of Dynamic Treatment Regimes
Learning Together Securely: Prototype-Based Federated Multi-Modal Hashing for Safe and Efficient Multi-Modal Retrieval
With the proliferation of multi-modal data, safe and efficient multi-modal hashing retrieval has become a pressing research challenge, particularly due to concerns over data privacy during centralized processing. To address this, we propos…
SRLCG: Self-Rectified Large-Scale Code Generation with Multidimensional Chain-of-Thought and Dynamic Backtracking
Large language models (LLMs) have revolutionized code generation, significantly enhancing developer productivity. However, for a vast number of users with minimal coding knowledge, LLMs provide little support, as they primarily generate is…
WindowKV: Task-Adaptive Group-Wise KV Cache Window Selection for Efficient LLM Inference
With the advancements in long-context inference capabilities of large language models (LLMs), the KV cache has become one of the foundational components. However, its substantial GPU memory consumption makes KV cache compression a key tech…
Large Language Model for Medical Images: A Survey of Taxonomy, Systematic Review, and Future Trends
The advent of Large Language Models (LLMs) has sparked considerable interest in the medical image domain, as they can generalize to multiple tasks and offer outstanding performance. While LLMs achieve promising results, there is currently …
Lasting biosignatures for 165 million years in lichens detected by multiple spectroscopies and the implication for extreme environmental and exoplanetary life exploring
MADAWSD: Multi-Agent Debate Framework for Adversarial Word Sense Disambiguation
RoDEval: A Robust Word Sense Disambiguation Evaluation Framework for Large Language Models
Missing Traffic Data Imputation with a Conditional Diffusion Framework
A Survey on Training-free Alignment of Large Language Models
Constructing Your Model’s Value Distinction: Towards LLM Alignment with Anchor Words Tuning
Plan Dynamically, Express Rhetorically: A Debate-Driven Rhetorical Framework for Argumentative Writing
Population Pharmacokinetics and Dosing Optimization of Norvancomycin for Chinese Patients with Community-Acquired Pneumonia
Age and serum creatinine (Scr) levels significantly influenced the pharmacokinetic parameters of NVCM in CAP patients. Our model-informed precision dosing approach may support early optimization of NVCM exposure. Further prospective studies with larger samp…
BianCang: A Traditional Chinese Medicine Large Language Model
The surge of large language models (LLMs) has driven significant progress in medical applications, including traditional Chinese medicine (TCM). However, current medical LLMs struggle with TCM diagnosis and syndrome differentiation due to …
PMoL: Parameter Efficient MoE for Preference Mixing of LLM Alignment
Reinforcement Learning from Human Feedback (RLHF) has been proven to be an effective method for preference alignment of large language models (LLMs) and is widely used in the post-training process of LLMs. However, RLHF struggles with hand…
Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information
Chain-of-Thought (CoT) has become a vital technique for enhancing the performance of Large Language Models (LLMs), attracting increasing attention from researchers. One stream of approaches focuses on the iterative enhancement of LLMs by c…
S³ Agent: Unlocking the Power of VLLM for Zero-Shot Multi-Modal Sarcasm Detection
Multi-modal sarcasm detection involves determining whether a given multi-modal input conveys sarcastic intent by analyzing the underlying sentiment. Recently, vision large language models have shown remarkable success on various multi-m…
SECON: Maintaining Semantic Consistency in Data Augmentation for Code Search
Efficient code search techniques are crucial in accelerating software development by aiding developers in locating specific code snippets and understanding code functionalities. This study investigates code search methodologies, focusing o…
CroPrompt: Cross-task Interactive Prompting for Zero-shot Spoken Language Understanding
Slot filling and intent detection are two highly correlated tasks in spoken language understanding (SLU). Recent SLU research attempts to explore zero-shot prompting techniques in large language models to alleviate the data scarcity proble…
Adaptive Prompt Learning with Negative Textual Semantics and Uncertainty Modeling for Universal Multi-Source Domain Adaptation
Universal Multi-source Domain Adaptation (UniMDA) transfers knowledge from multiple labeled source domains to an unlabeled target domain under domain shifts (different data distribution) and class shifts (unknown target classes). Existing …
Prevalence and risk factors of early postoperative seizures in patients with glioma: A protocol for meta-analysis and systematic review
Introduction: Early postoperative seizures have been the most common clinical manifestation in gliomas; however, their incidence and risk factors remain controversial. This protocol describes a system…
An empirical study of next-basket recommendations
Next Basket Recommender Systems (NBRs) function to recommend the subsequent shopping baskets for users through the modeling of their preferences derived from purchase history, typically manifested as a sequence of historical baskets. Given…
Dream to Adapt: Meta Reinforcement Learning by Latent Context Imagination and MDP Imagination
Meta reinforcement learning (Meta RL) has been amply explored to quickly learn an unseen task by transferring previously learned knowledge from similar tasks. However, most state-of-the-art algorithms require the meta-training tasks to hav…
Integrate prediction of machine learning for single ACoA rupture risk: a multicenter retrospective analysis
Background: Anterior communicating artery aneurysms (ACoA) account for 30 to 35% of intracranial aneurysms. Once ruptured, an ACoA presents acutely and can cause severe neurological dysfunction and even death. Therefore, clinical …
Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models
Pre-trained vision-language models, e.g., CLIP, working with manually designed prompts have demonstrated great capacity for transfer learning. Recently, learnable prompts achieve state-of-the-art performance, which however are prone to over…
Medical Question Summarization with Entity-driven Contrastive Learning
By summarizing longer consumer health questions into shorter and essential ones, medical question-answering systems can more accurately understand consumer intentions and retrieve suitable answers. However, medical question summarization i…