Hyesu Lim
CytoSAE: Interpretable Cell Embeddings for Hematology
Sparse autoencoders (SAEs) emerged as a promising tool for mechanistic interpretability of transformer-based foundation models. Very recently, SAEs were also adopted for the visual domain, enabling the discovery of visual concepts and thei…
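The sparse autoencoder this abstract builds on can be illustrated with a minimal sketch: an overcomplete ReLU encoder whose L1-penalized code units play the role of discoverable concepts. This is a generic NumPy illustration of the technique, not code from the paper; the dimensions, initialization, and penalty weight are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SAE: map d-dim activations to an overcomplete m-dim code and back.
# Sizes here are illustrative, not taken from the paper.
d, m = 16, 64
W_enc = rng.normal(0, 0.1, size=(d, m))
b_enc = np.zeros(m)
W_dec = rng.normal(0, 0.1, size=(m, d))
b_dec = np.zeros(d)

def encode(x):
    # ReLU keeps only positively activated code units, inducing sparsity.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def sae_loss(x, l1_coeff=1e-3):
    z = encode(x)
    x_hat = z @ W_dec + b_dec                 # linear decoder
    recon = np.mean((x - x_hat) ** 2)         # reconstruction term
    sparsity = l1_coeff * np.mean(np.abs(z))  # L1 penalty: few active units
    return recon + sparsity

x = rng.normal(size=(8, d))   # stand-in for transformer activations
loss = sae_loss(x)
```

Training would minimize `sae_loss` over activations collected from the foundation model; the sparse code dimensions are then inspected as candidate concepts.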
Sparse autoencoders reveal selective remapping of visual concepts during adaptation
Adapting foundation models for specific purposes has become a standard approach to build machine learning systems for downstream applications. Yet, it is an open question which mechanisms take place during adaptation. Here we develop a new…
Translation Deserves Better: Analyzing Translation Artifacts in Cross-lingual Visual Question Answering
Building a reliable visual question answering (VQA) system across different languages is a challenging problem, primarily due to the lack of abundant samples for training. To address this challenge, recent studies have employed machine tra…
Towards Calibrated Robust Fine-Tuning of Vision-Language Models
Improving out-of-distribution (OOD) generalization during in-distribution (ID) adaptation is a primary goal of robust fine-tuning of zero-shot models beyond naive fine-tuning. However, despite decent OOD generalization performance from rec…
PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration
Document-level relation extraction (DocRE) aims to extract relations of all entity pairs in a document. A key challenge in DocRE is the cost of annotating such data which requires intensive human effort. Thus, we investigate the case of Do…
TTN: A Domain-Shift Aware Batch Normalization in Test-Time Adaptation
This paper proposes a novel batch normalization strategy for test-time adaptation. Recent test-time adaptation methods heavily rely on the modified batch normalization, i.e., transductive batch normalization (TBN), which calculates the mea…
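The transductive batch normalization (TBN) this abstract contrasts against can be sketched generically: instead of the running statistics stored at training time, each feature is normalized with the mean and variance of the current test batch. The interpolated variant below is a simplified assumption for illustration; the paper's actual domain-shift-aware weighting is more involved, and none of these names come from its code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source statistics, as a conventional BN layer would store from training.
mu_src, var_src = 0.0, 1.0

def transductive_bn(x, eps=1e-5):
    # TBN: normalize with statistics of the current test batch only.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def interpolated_bn(x, alpha=0.5, eps=1e-5):
    # Hypothetical mix of source and test-batch statistics; a stand-in
    # for the paper's learned, shift-aware combination.
    mu = alpha * x.mean(axis=0) + (1 - alpha) * mu_src
    var = alpha * x.var(axis=0) + (1 - alpha) * var_src
    return (x - mu) / np.sqrt(var + eps)

# A distribution-shifted test batch: shifted mean, inflated scale.
x = rng.normal(loc=3.0, scale=2.0, size=(32, 4))
z = transductive_bn(x)
```

Under shift, `transductive_bn` recenters the batch regardless of the source statistics, which is exactly the behavior whose failure modes at small batch sizes motivate shift-aware alternatives.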
AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain
During the fine-tuning phase of transfer learning, the pretrained vocabulary remains unchanged, while model parameters are updated. The vocabulary generated based on the pretrained data is suboptimal for downstream data when domain discrep…