Tongxuan Liu
From Hypothesis to Premises: LLM-based Backward Logical Reasoning with Selective Symbolic Translation
Logical reasoning is a core challenge in natural language understanding and a fundamental capability of artificial intelligence, underpinning scientific discovery, mathematical theorem proving, and complex decision-making. Despite the rema…
TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection
Large Vision-Language Models have demonstrated remarkable performance across various tasks; however, the challenge of hallucinations constrains their practical applications. The hallucination problem arises from multiple factors, including…
S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency
Large language models (LLMs) have demonstrated remarkable capabilities across various natural language processing (NLP) scenarios, but they still face challenges when handling complex arithmetic and logical reasoning tasks. While Chain-Of-…
DCP: Dual-Cue Pruning for Efficient Large Vision-Language Models
FoPru: Focal Pruning for Efficient Large Vision-Language Models
Large Vision-Language Models (LVLMs) represent a significant advancement toward achieving superior multimodal capabilities by enabling powerful Large Language Models (LLMs) to understand visual input. Typically, LVLMs utilize visual encode…
Leveraging LLMs for Hypothetical Deduction in Logical Inference: A Neuro-Symbolic Approach
Large Language Models (LLMs) have exhibited remarkable potential across a wide array of reasoning tasks, including logical reasoning. Although massive efforts have been made to empower the logical reasoning ability of LLMs via external log…
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks but their performance in complex logical reasoning tasks remains unsatisfactory. Although some prompting methods, such as Chain-of-Thought, can imp…
GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion
In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse NLP tasks. Extensive research has explored how to enhance their logical reasoning abilities through methods such as Chain-of-Thought, Chain-of-Thought wit…
Multi-group Uncertainty Quantification for Long-form Text Generation
While past works have shown how uncertainty quantification can be applied to large language model (LLM) outputs, the question of whether resulting uncertainty guarantees still hold within sub-groupings of data remains open. In our work, gi…