Hui‐Ling Zhen
Scaling Up, Speeding Up: A Benchmark of Speculative Decoding for Efficient LLM Test-Time Scaling
Test-time scaling has emerged as a powerful paradigm for enhancing the reasoning capabilities of large language models (LLMs) by allocating additional computational resources during inference. However, this paradigm is inherently inefficie…
Attention-Aware GNN-based Input Defense against Multi-Turn LLM Jailbreak
Large Language Models (LLMs) have gained significant traction in various applications, yet their capabilities present risks for both constructive and malicious exploitation. Despite extensive training and fine-tuning efforts aimed at enhan…
Accelerating Large Language Model Reasoning via Speculative Search
Tree-search-based reasoning methods have significantly enhanced the reasoning capability of large language models (LLMs) by facilitating the exploration of multiple intermediate reasoning steps, i.e., thoughts. However, these methods suffe…
Exposure-response analyses of pemigatinib in patients with myeloid/lymphoid neoplasms with fibroblast growth factor receptor 1 rearrangement
Pemigatinib is a selective, potent, orally administered inhibitor of fibroblast growth factor receptor (FGFR)1-3 with antitumor activity in multiple solid tumors. Pemigatinib is used to treat adults with previously treated metastatic or su…
Unlocking Efficient Long-to-Short LLM Reasoning with Model Merging
The transition from System 1 to System 2 reasoning in large language models (LLMs) has marked significant advancements in handling complex tasks through deliberate, iterative thinking. However, this progress often comes at the cost of effi…
PASER: Post-Training Data Selection for Efficient Pruned Large Language Model Recovery
Model pruning is an effective approach for compressing large language models (LLMs). However, this process often leads to significant degradation of model capabilities. While post-training techniques such as instruction tuning are commonly…
Certifying Language Model Robustness with Fuzzed Randomized Smoothing: An Efficient Defense Against Backdoor Attacks
The widespread deployment of pre-trained language models (PLMs) has exposed them to textual backdoor attacks, particularly those planted during the pre-training stage. These attacks pose significant risks to high-reliability applications, …
KVTuner: Sensitivity-Aware Layer-Wise Mixed-Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference
KV cache quantization can improve Large Language Model (LLM) inference throughput and latency in long-context and large-batch-size scenarios while preserving LLM effectiveness. However, current methods have three unsolved issues: overl…
MixPE: Quantization and Hardware Co-design for Efficient LLM Inference
Transformer-based large language models (LLMs) have achieved remarkable success as model sizes continue to grow, yet their deployment remains challenging due to significant computational and memory demands. Quantization has emerged as a pr…
The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation
Logic synthesis is a crucial phase in the circuit design process, responsible for transforming hardware description language (HDL) designs into optimized netlists. However, traditional logic synthesis methods are computationally intensive,…
HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation
Efficiently determining the satisfiability of a Boolean equation -- known as the SAT problem for brevity -- is crucial in various industrial problems. Recently, the advent of deep learning methods has introduced significant potential for e…
GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection
Boolean satisfiability (SAT) problems are routinely solved by SAT solvers in real-life applications, yet solving time can vary drastically between solvers for the same instance. This has motivated research into machine learning models that…
Logic Optimization Meets SAT: A Novel Framework for Circuit-SAT Solving
The Circuit Satisfiability (CSAT) problem, a variant of the Boolean Satisfiability (SAT) problem, plays a critical role in integrated circuit design and verification. However, existing SAT solvers, optimized for Conjunctive Normal Form (CN…
The Dawn of AI-Native EDA: Opportunities and Challenges of Large Circuit Models
Within the Electronic Design Automation (EDA) domain, AI-driven solutions have emerged as formidable tools, yet they typically augment rather than redefine existing methodologies. These solutions often repurpose deep learning models from o…
IB-Net: Initial Branch Network for Variable Decision in Boolean Satisfiability
Boolean Satisfiability problems are vital components in Electronic Design Automation, particularly within the Logic Equivalence Checking process. Currently, SAT solvers are employed for these problems, and neural networks have been tried as assista…
DiLA: Enhancing LLM Tool Learning with Differential Logic Layer
Considering the challenges faced by large language models (LLMs) in logical reasoning and planning, prior efforts have sought to augment LLMs with access to external solvers. While progress has been made on simple reasoning problems, solvi…
BetterV: Controlled Verilog Generation with Discriminative Guidance
Due to the growing complexity of modern Integrated Circuits (ICs), there is a need for automated circuit design methods. Recent years have seen rising research in hardware design language generation to facilitate the design process. In thi…
Machine Learning Insides OptVerse AI Solver: Design Principles and Applications
In an era of digital ubiquity, efficient resource management and decision-making are paramount across numerous industries. To this end, we present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Clo…
DeepGate2: Functionality-Aware Circuit Representation Learning
Circuit representation learning aims to obtain neural representations of circuit elements and has emerged as a promising research direction that can be applied to various EDA and logic reasoning tasks. Existing solutions, such as DeepGate,…
Supplementary Data from Clinicogenomic Analysis of FGFR2-Rearranged Cholangiocarcinoma Identifies Correlates of Response and Mechanisms of Resistance to Pemigatinib
Supplemental tables and figures
Data from Clinicogenomic Analysis of FGFR2-Rearranged Cholangiocarcinoma Identifies Correlates of Response and Mechanisms of Resistance to Pemigatinib
Pemigatinib, a selective FGFR1–3 inhibitor, has demonstrated antitumor activity in FIGHT-202, a phase II study in patients with cholangiocarcinoma harboring FGFR2 fusions/rearrangements, and has gained regulatory approval in the United Sta…
Conflict-driven Structural Learning Towards Higher Coverage Rate in ATPG
Due to the increasing challenges posed by the relentless rise in the design complexity of integrated circuits, Boolean Satisfiability (SAT) has emerged as a robust alternative to structural ATPG techniques. However, the high cost of transf…
HardSATGEN: Understanding the Difficulty of Hard SAT Formula Generation and A Strong Structure-Hardness-Aware Baseline
Industrial SAT formula generation is a critical yet challenging task. Existing SAT generation approaches can hardly simultaneously capture the global structural properties and maintain plausible computational hardness. We first present an …