Saehyung Lee
Enhancing abscopal effects in colorectal cancer via photothermal therapy combined with Interferon-β
CRISPR-Induced NUT Suppression Promotes Differentiation and Enhances Trop2-Targeted Immunocytokine Response in NUT Carcinoma
NUT carcinoma (NC) is an aggressive malignancy driven by NUTM1 gene rearrangements with limited therapeutic options. Here, we show that direct suppression of NUTM1 using CRISPR/Cas9 induces squamous-like differentiation and upregulates TRO…
Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage
Multimodal large language models (MLLMs) excel at generating highly detailed captions but often produce hallucinations. Our analysis reveals that existing hallucination detection methods struggle with detailed captions. We attribute this t…
Superpixel Tokenization for Vision Transformers: Preserving Semantic Integrity in Visual Tokens
Transformers, a groundbreaking architecture proposed for Natural Language Processing (NLP), have also achieved remarkable success in Computer Vision. A cornerstone of their success lies in the attention mechanism, which models relationship…
Unleashing Multi-Hop Reasoning Potential in Large Language Models through Repetition of Misordered Context
Multi-hop reasoning, which requires multi-step reasoning based on the supporting documents within a given context, remains challenging for large language models (LLMs). LLMs often struggle to filter out irrelevant documents within the cont…
Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection
In our study, we explore methods for detecting unwanted content lurking in visual datasets. We provide a theoretical analysis demonstrating that a model capable of successfully partitioning visual data can be obtained using only textual da…
Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach
In this paper, we primarily address the issue of dialogue-form context query within the interactive text-to-image retrieval task. Our methodology, PlugIR, actively utilizes the general instruction-following capability of LLMs in two ways. …
Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors
Test-time adaptation (TTA) fine-tunes pre-trained deep neural networks for unseen test data. The primary challenge of TTA is limited access to the entire test dataset during online updates, causing error accumulation. To mitigate it, TTA m…
Gradient Alignment with Prototype Feature for Fully Test-time Adaptation
In the context of Test-time Adaptation (TTA), we propose a regularizer, dubbed Gradient Alignment with Prototype feature (GAP), which alleviates the inappropriate guidance of the entropy minimization loss caused by misclassified pseudo labels. We devel…
DAFA: Distance-Aware Fair Adversarial Training
The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem. Existing methodologies aimed to enhance robust fairness by sacrificing the model's pe…
On mitigating stability-plasticity dilemma in CLIP-guided image morphing via geodesic distillation loss
Large-scale language-vision pre-training models, such as CLIP, have achieved remarkable text-guided image morphing results by leveraging several unconditional generative models. However, existing CLIP-guided image morphing methods encounte…
On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection
Successful detection of Out-of-Distribution (OoD) data is becoming increasingly important to ensure safe deployment of neural networks. One of the main challenges in OoD detection is that neural networks output overconfident predictions on…
Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training
Several recent studies have shown that the use of extra in-distribution data can lead to a high level of adversarial robustness. However, there is no guarantee that it will always be possible to obtain sufficient extra data for a selected …
Dataset Condensation with Contrastive Signals
Recent studies have demonstrated that gradient matching-based dataset synthesis, or dataset condensation (DC), methods can achieve state-of-the-art performance when applied to data-efficient learning tasks. However, in this study, we prove…
Removing Undesirable Feature Contributions Using Out-of-Distribution Data
Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of availability of UID data and d…
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Adversarial examples cause neural networks to produce incorrect outputs with high confidence. Although adversarial training is one of the most effective forms of defense against adversarial examples, unfortunately, a large gap exists betwe…
Development of Human Monoclonal Antibody for Claudin-3 Overexpressing Carcinoma Targeting
Most malignant tumors originate from epithelial tissues in which tight junctions mediate cell–cell interactions. Tight junction proteins, especially claudin-3 (CLDN3), are overexpressed in various cancers. Claudin-3 is exposed externally d…
A Glycoengineered Interferon-β Mutein (R27T) Generates Prolonged Signaling by an Altered Receptor-Binding Kinetics
The glycoengineering approach is used to improve biophysical properties of protein-based drugs, but its direct impact on binding affinity and kinetic properties for the glycoengineered protein and its binding partner interaction is unclear…